"No, they have gone to absurd lengths to avoid civilian casualties."
It's telling that you think an attempt to avoid killing people might be deemed 'absurd'.
The United Nations seem to disagree with you.
"No, they have gone to absurd lengths to avoid civilian casualties."
It's telling you consider an attempt to avoid killing people might be deemed 'absurd'.
The United Nations seem to disagree with you.
Britbox is my favourite for 1970s and 1980s British sci-fi.
A doctor provides a service and charges a fee.
A merchant buys widgets at a low price, sells them at a higher price, and makes a profit.
I don't think there's a problem with the former, within reason. The latter case is less noble, perhaps, but if it's providing a service (importing goods from overseas, say) it's still a win-win for both sides of the trade.
However, it's easier to screw people on the way to making a profit than it is by charging an overblown fee for a service. There's no great skill in buying low and selling high; an overcharging doctor still has to graduate from medical school first.
What’s my point? We all need to earn a wage to live. Don’t blame the people getting up each day to earn a wage by providing a service. Blame those who are trying to make a buck by gross profiteering.
Probably different projects, though. Not sure what projects the US has, but in the UK you have a number of types of warehouse:
- Fulfilment centres: Where all your normal run-of-the-mill Amazon orders are picked and packed.
- Distribution centres: Centralised locations that orders are routed through. You might have 20 fulfilment centres sending packages to a distribution centre, which then amalgamates them onto individual trucks (or planes) destined for further-away locations. So imagine 20 trucks bringing packages in from the closest fulfilment centres; those packages are then reorganised so that maybe 4 trucks' worth are amalgamated to go down to London, 4 up to Scotland, 2 to the South West and so on, whilst the remainder go to local delivery stations because they're for local parcels.
- Delivery stations: These are the last mile stations, where parcels are offloaded from trucks onto local delivery vans.
- AMXL warehouses: This is Amazon's extra-large project for oversized goods. If you buy something like a washing machine or fridge on Amazon, it's picked and dispatched through AMXL centres. They're kitted out with equipment for shifting heavy goods.
- Prime Now warehouses: These are local all-in-one centres stocking a smaller selection of goods people want quickly - typically groceries, the latest video games, batteries - which are picked, packed, and dispatched all from one place for same-day deliveries, within 2 hours, to people local enough to be offered that.
- Returns centres: These handle returns unsurprisingly.
I'm sure there are other types I'm not aware of.
It's not uncommon for Amazon to build clusters, i.e. AMXL, distribution, and delivery all next to each other. Some are even interconnected, so parcels destined for deliveries local to a distribution centre might travel on conveyors straight from the DC into the delivery station, routed straight through without packing and unpacking trucks.
So there is method to the madness of them building warehouses next to each other. On the outside they all look the same; on the inside they're doing completely different things.
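As a rough illustration only (the facility types are from the description above, but the routing rules and names here are my own made-up simplification, not Amazon's actual system), the flow could be modelled something like this:

```typescript
// Hypothetical sketch of the parcel flow described above. The facility
// types are real; the routing logic and field names are invented.

type Facility =
  | "FulfilmentCentre"   // picks and packs normal orders
  | "DistributionCentre" // amalgamates parcels onto long-haul trucks
  | "DeliveryStation"    // last mile: trucks onto local vans
  | "AMXL"               // oversized goods (washing machines, fridges)
  | "PrimeNow";          // small local selection, same-day delivery

interface Parcel {
  oversized: boolean;
  sameDay: boolean;
  destinationIsLocal: boolean; // local to the originating cluster
}

// Decide the path a parcel takes through the network.
function route(p: Parcel): Facility[] {
  if (p.oversized) return ["AMXL"];     // picked and dispatched in one place
  if (p.sameDay) return ["PrimeNow"];   // all-in-one, beats the 2-hour window
  if (p.destinationIsLocal) {
    // In an interconnected cluster this hop can be a conveyor, not a truck.
    return ["FulfilmentCentre", "DeliveryStation"];
  }
  // Long distance: amalgamated at a distribution centre first.
  return ["FulfilmentCentre", "DistributionCentre", "DeliveryStation"];
}

console.log(route({ oversized: false, sameDay: false, destinationIsLocal: false }));
// -> [ "FulfilmentCentre", "DistributionCentre", "DeliveryStation" ]
```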
I can kind of understand the use case for this. The problem is that for serverless code execution, cloud providers currently typically use containers to deliver FaaS, and even in the best case you can still see sometimes-unacceptable cold start times if no instance of the execution environment is cached and available to serve.
This means the promise of cloud-based hyper-scalability through FaaS for web apps has some real problems, at both ends of the scale:
- At the bottom end, with low rates of concurrent execution, FaaS suffers from the cold start problem. You can't realistically serve backend requests for a front-end site using FaaS in this scenario, because there's never an instance ready to serve the request; each time a user visits the site and a call is made to your FaaS function, the cold start means response times as bad as on the order of seconds, which is too long for user interaction. (A cold start is easy to observe; see the sketch after this list.)
- In the middle range the cold start problem goes away to some degree, because requests arrive frequently enough that there's usually a warm instance to serve them, so most of your users avoid the cold start. You still have it for some of your users, however, as demand ebbs and flows; your functions still have to scale up and down appropriately, and so it becomes a headache making sure you don't under- or over-provision. (Note you can have this problem at the low end too, using things like provisioned concurrency for AWS Lambda or the Azure Functions Premium plan.)
- At the high end you have different issues: the promise of FaaS, serverless, and infinite scalability vanishes. AWS only allows 3,000 concurrent Lambda executions even in their largest regions, for example. That's a reasonable load, but I've worked on services where you need to go to, say, 50,000 concurrent requests, and AWS Lambda just can't do it - your users get sporadic HTTP 429 errors as it throttles your requests. AWS can raise this limit for you, but at that point you're really fudging a solution into a cloud architecture that's straining to cope. Amazon's limits exist because, for all the hype, even Amazon having to fire up 3,000 containers for a number of customers at once can strain their capacity; forcing people to explicitly ask for a higher cap means they can better do capacity planning for customers in any given region.
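For illustration, here's a minimal sketch of how you can actually see cold starts, assuming a Node.js AWS Lambda (the handler itself is hypothetical): module-scope code runs once per container instance, so a flag set there distinguishes the first invocation from warm ones.

```typescript
// Module scope runs once per container instance (i.e. on a cold start),
// so this flag is true only for the first invocation each container serves.
let coldStart = true;

export const handler = async (_event: unknown) => {
  const wasCold = coldStart;
  coldStart = false;

  // A cold invocation also paid for runtime boot and module initialisation,
  // which is exactly the latency described above.
  console.log(JSON.stringify({ coldStart: wasCold }));

  return {
    statusCode: 200,
    body: JSON.stringify({ coldStart: wasCold }),
  };
};
```

Hit an endpoint backed by this after it's been idle and you'll see `coldStart: true` plus the extra latency; hit it again immediately and it's warm.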
So it's not entirely surprising, therefore, that there's a push for a more granular type of container: one that's faster to spin up, whilst still being isolated from other execution environments, with less overhead than even Linux containers. Such a thing is needed to get us closer to the goal of a cloud environment that can support both small and large web app back ends alike, because right now the issues above mean that FaaS is often limited to other types of workload, like background order processing and that kind of thing.
I imagine, therefore, that this is what this solution is oriented towards. The problem is it's not language agnostic, meaning you'd need a similar solution for each other supported language. Ideally you need an execution environment that can guarantee it'll spin up, process, and respond to a request within mere milliseconds, in an isolated execution environment.
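To make the idea concrete, here's an analogy only, using Node's built-in vm module (not whatever mechanism this particular solution actually uses): an in-process context can be created orders of magnitude faster than booting a container, but vm is explicitly not a security boundary, and getting container-grade isolation at this speed is precisely the hard part.

```typescript
// Analogy only: spinning up an in-process execution context with node:vm.
// Fast to create, but NOT real isolation - the hard problem is getting
// container-like isolation at this kind of speed.
import vm from "node:vm";

const start = process.hrtime.bigint();

const sandbox: { result?: number } = {};
vm.createContext(sandbox);                                    // "spin up" the environment
vm.runInContext("result = 2 + 2;", sandbox, { timeout: 10 }); // run untrusted-ish code

const elapsedUs = Number(process.hrtime.bigint() - start) / 1000;
console.log(sandbox.result, `${elapsedUs.toFixed(0)}us`);
// -> 4, typically well under a millisecond
```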
Disclaimer: I 100% agree with you about the usage of JS. I use it professionally day in, day out at a FAANG-scale company right now, but have worked with C, C++, Java, C#, PHP, and Python professionally in the past. Companies shouldn't be using JS like they are; it's genuinely leading to poorer quality code, but employers are being baited in by the hordes of cheap young code camp students being taught JS. We also use TS, but unless you understand OO and types properly these devs just end up treating TS like JS; the second they encounter the need for a complex type they just resort to the any type and go back to the JS way of doing things, and it rapidly becomes a clusterfuck once more. By the time you've taught them to code properly you might as well have just used a more appropriate language like Java, C#, Go, Rust, C++ or similar and trained people from the ground up yourself through an apprenticeship programme, so it's really a false economy to use JS for the "ease of use" or "cheap labour".
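To illustrate the any escape hatch described above (the order/total example is made up, not from any particular codebase):

```typescript
// The anti-pattern: reaching for `any` instead of modelling the type,
// which silently discards all of TypeScript's checking. Typos and shape
// errors in `order` only surface at runtime.
function totalAny(order: any): number {
  return order.items.reduce((sum: number, i: any) => sum + i.price * i.qty, 0);
}

// The typed way: exactly the same logic, but the compiler now enforces
// the shape of the data and catches mistakes at build time.
interface LineItem {
  price: number;
  qty: number;
}
interface Order {
  items: LineItem[];
}
function totalTyped(order: Order): number {
  return order.items.reduce((sum, i) => sum + i.price * i.qty, 0);
}

console.log(totalTyped({ items: [{ price: 5, qty: 2 }] })); // 10
```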
Probably:
https://earthquaketrack.com/r/...
"North Atlantic Ocean has had: (M1.5 or greater)
6 earthquakes in the past 24 hours
51 earthquakes in the past 7 days
190 earthquakes in the past 30 days
3,109 earthquakes in the past 365 days"
Interestingly, that page shows there was a magnitude 5.3 along the mid-Atlantic ridge in the last 24 hours alone. Unfortunately the page is a bit shit to navigate, so I can't find an easy way of seeing the data for the 15th.
I think the problem, though, is that these things happen all the time; whether any given one causes surface water movement depends entirely on its unique circumstances. You could have a magnitude 6 that no one notices because it had no real impact, but a magnitude 3 that just happens to trigger a massive underwater landslide resulting in a tsunami.
Having shore dived the Caribbean regularly, sometimes doing 5 dives a day, and knowing that the waves can vary from fuck all to a metre or so high in the space of just a few hours if the winds pick up, separate from standard tidal movements, I'd wager there are plenty of 10cm tsunamis, but the vast majority will simply be lost in the natural variation of weather-driven waves. This one was probably only noticed because someone was looking at tidal variation at the time of a big, internationally well-publicised tsunami and has, as a result, gotten a bit overexcited theorising without really thinking it through.
Eruption happened at 04:14 UTC; the tsunami hit Japan at 14:14 UTC. Three hours before that would be 11:14 UTC, but the shockwave was travelling at the speed of sound, which is around 761mph at sea level. The Caribbean is at least 7,000 miles away, so at the speed of sound it would take around 9hrs 12mins to get there, which would mean roughly 13:26 UTC at the absolute earliest - under an hour before the Japanese tsunami, not 3 hours before. Furthermore, if the shockwave itself was causing tsunamis, this doesn't explain why the first Japanese tsunami happened after the first Caribbean one; you'd have expected shockwave-driven tsunamis to appear in Japan before the Caribbean ones regardless, even if the main tsunami, driven by the displacement of water at the site of the eruption itself, took longer to arrive.
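The timing is easy to check; a quick sketch using the figures above (which are themselves rough assumptions, i.e. 7,000 miles and 761mph):

```typescript
// Sanity check of the shockwave timing argument. Inputs are the comment's
// own rough figures, not measured values.
const distanceMiles = 7000;
const speedMph = 761; // speed of sound at sea level, approx.

const travelHours = distanceMiles / speedMph;  // ~9.2 hours
const eruptionUtcMinutes = 4 * 60 + 14;        // 04:14 UTC
const arrivalUtcMinutes = eruptionUtcMinutes + travelHours * 60;

const hh = Math.floor(arrivalUtcMinutes / 60);
const mm = Math.round(arrivalUtcMinutes % 60);
console.log(`Earliest shockwave arrival: ${hh}:${String(mm).padStart(2, "0")} UTC`);
// -> roughly 13:26 UTC, about 48 minutes before the 14:14 UTC Japan tsunami.
```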
I appreciate it's possible the suggestion is that the shockwave passed through the core of the planet rather than around the surface, but I'm not convinced they're not simply confusing correlation with causation here. I suspect what more likely happened is that the movements within the Earth's core or crust that triggered the Tonga eruption also triggered a minor eruption or plate shift somewhere in the Atlantic at around the same time, causing an underwater landslide and the minor tsunami seen in the Caribbean. This would be a far more plausible explanation, for starters because it would actually be physically possible in the timeframes given.
It is, but it's worth noting that, as majestic as this octopus is, transparency like this is not uncommon in the ocean. Many, many species of creature start their lives transparent in the ocean, including many octopuses that later turn out to be non-transparent. Even many baby moray eels start out transparent.
If you like that octopus look up Black Water Photography:
https://www.google.com/search?...
It's the practice of taking underwater macro photographs of typically juvenile creatures like this, in dark open water in the middle of the night, because the ocean is absolutely full of this majestic stuff after dark. If anyone ever decides to try it, though, my one piece of advice is to wear a hood, even if you don't normally when you dive - there's nothing more creepy than feeling this shit crawling all over your head and feeling like it's gone into your ears at times, especially the creepy millipede-like things that look like they run through the water on hundreds of legs. Stuff of nightmares!
It really is an alien world in the ocean; there's life there that makes things imagined in sci-fi look utterly tame and unimaginative in comparison.
I don't think so; medicines are routinely updated post-approval, once they're on the open market and new side effects are found, because realistically, if you're talking about a 1-in-500,000 issue, even getting 500,000 test subjects for most medicines is flat out impossible - a lot of the time you're talking about medicines for conditions that simply don't have that many sufferers at any given time. The only reason it's making headlines this time is that we're talking about medicines everyone is getting, so those rare cases are, in absolute numbers, more obvious.
If you have a vaccine for something that isn't given as broadly, it's possible you'd simply never see such rare outcomes even though they're theoretically possible. So this isn't really a function of a lack of testing prior to release so much as business as usual making headlines because it's relevant to everyone. If, for example, the rabies or Japanese encephalitis vaccines had side effects like this, you wouldn't expect the UK's medicines regulator to even notice, because those vaccines are given out so rarely in the UK; but that doesn't mean rare side effects not found during testing aren't a possibility there too.
IMO it's only really an issue when, as with the AZ vaccine, the British government tried to bury it out of nationalist pride: first by saying it wasn't a real issue and Europe was just bitter about Brexit, then claiming it's only a 1 in 1 million chance, before finally admitting a few weeks back that it's a 1 in 60,000 chance of getting a blood clot and effectively, in real terms, phasing out the AZ vaccine in the UK, because no one after that point is now getting it in the UK other than for second doses.
So all we're really seeing here is everything happening at high speed. Whereas with many vaccines or medicines it might take many years before millions of people are treated with them and enough cases of a rare side effect are noticed, here we're seeing it in a much shorter time frame. That's not because rushing has made things less safe; it's just made issues that are typically noticed over years or even decades in classically vetted medicines get noticed within months instead, because of the sheer numbers involved.
"I can't let misinformation about plants to propagate!"
I can't tell if this is some kind of botanist joke or not.
God, the longer you open source zealots carry on this crusade against something that happened 20 years ago, the more you look like complete and utter bitter losers.
It's really not a good look; you need to get over it. Gates grew up and became a better person; the fact that you lot can't just shows how utterly pathetic and hopeless you are.
Weird, maybe it's a regional thing? I bought Halo Wars 2 via the Windows Store in the UK when it was on sale a couple of years ago.
Failing that, I think it's on Xbox Game Pass as a cross-play game; you could use one of the £1 deals and play through it in a month, I guess.
I'm not sure when you last looked for Halo Wars but it's most certainly available on Steam:
https://store.steampowered.com...
You can also get Halo Wars 2 on the Windows Store for PC, so I don't know why you think that's not the case.
It's been available for Windows for quite some years, and not simply as a code unlock; in fact, Halo Wars 2 was released simultaneously on Xbox and Windows with cooperative cross-play from day one, so in that respect it's never not been available on Windows post-release.
/* Halley */ (Halley's comment.)