Chiplets? (Score 5, Interesting)
Are Meteor Lake "Tiles" just Intel branding for chiplet designs?
Oh, there are fees so much worse than the "Regional Sports Fee," which usually is actually paid to NBC or someone.
The fees that are really bad are things like the "Regulatory Compliance Fee," a.k.a. "We have to follow the law, so we're going to add it onto the price, as if it's not part of the cost of service."
Ignoring for a moment the fact that not a single UK citizen lives in the Arctic... there's also the fact that there are hybrid heat pumps now. You can use gas or resistance heating to raise the incoming air temperature just enough for the heat pump to operate efficiently. For example, the Mitsubishi Hyperheat model is designed to deliver 100% of its rated heating capacity down to -17 C and 85% down to -25 C. That would be sufficient for Anchorage, Alaska.
At 85% heating capacity you might need a sweater indoors, and you could always add a space heater or radiant heater to make up the difference.
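To put the 85% figure in numbers, here's a rough sketch assuming a hypothetical house with a 10 kW design heating load (these numbers are illustrative, not from any spec sheet):

```python
# Illustrative only: the 10 kW design load and rated capacity are made-up
# numbers, not figures from a Mitsubishi spec sheet.
design_load_kw = 10.0      # heat the house needs on the coldest design day
rated_capacity_kw = 10.0   # heat pump output in mild weather
capacity_fraction = 0.85   # fraction of rated output still available at -25 C

pump_output_kw = rated_capacity_kw * capacity_fraction  # ~8.5 kW from the pump
shortfall_kw = design_load_kw - pump_output_kw          # ~1.5 kW left over

# A small space heater or radiant heater covers the shortfall.
print(f"pump: {pump_output_kw:.1f} kW, supplemental heat needed: {shortfall_kw:.1f} kW")
```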
Another option, if you're in an EXTREME environment like "the Arctic," is a GHP or geothermal heat pump (also called a ground-source heat pump, or GSHP): you bury or drill a heat-exchanger loop and use ground temperature, which stays far milder throughout the year, rather than air temperature.
To quote a study on the feasibility of geothermal heat pumps in the Arctic:
Owner Satisfaction
The surveyed homeowners reported that their heat pumps are generally meeting their expectations.
Many respondents noted substantial energy savings over using oil heat. Additionally, homeowners felt
that the low maintenance requirements of the heat pump are an advantage, likening it to a refrigerator.
Other noted advantages include not relying on the fluctuating prices of fuel, no on-site combustion, and
no on-site fuel storage.
Discussion
The success of surveyed systems in the Fairbanks area indicates that Interior Alaska has the potential for
efficient GSHP operation.
If it works in Fairbanks at 64 degrees North, then it'll work almost anywhere. Only about 13 million people live north of the 60th parallel, out of billions worldwide.
It's not pouring money into private industry. It's not a subsidy; it was just a fixed-price contract. But fixed-price contracts signed pre-COVID no longer make any sense post-COVID, due to unexpected once-in-a-century inflation.
It was a badly designed subsidy because it was trying to avoid being a subsidy. The US solution worked out far better: every year the US Department of Energy sets a benchmark price for fossil-fuel-generated electricity, and wind providers get paid the difference between their cost and that benchmark. It's actually a subsidy, and as the cost of wind drops, the subsidy drops. There was speculation a while ago, when natural gas was high and wind costs were still dropping, that the subsidy would naturally come to an end. But I think the last couple years of instability in the energy markets have pushed that out a few more years.
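A minimal sketch of how that kind of subsidy winds down on its own, using hypothetical per-MWh prices (the actual DOE benchmark values aren't shown here):

```python
def wind_subsidy(wind_cost, fossil_benchmark):
    """Per-MWh payment to a wind provider: the gap between wind's cost and
    the fossil-fuel benchmark. Once wind is cheaper, the subsidy hits zero."""
    return max(0.0, wind_cost - fossil_benchmark)

# Hypothetical prices in $/MWh:
print(wind_subsidy(60.0, 45.0))  # wind still costs more, gets topped up
print(wind_subsidy(40.0, 45.0))  # wind now cheaper, subsidy naturally ends
```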
It's not a subsidy; it's a fixed-price contract that was intended to give investors the stability to finance projects.
The problem is that the auctions were so long ago that the fixed price in the contract with the national utility is now way below both the market price and the cost of construction.
It's actually a negative subsidy. If the market price for wind energy goes over the fixed contract price, the wind farm has to pay the government the difference; if the market price drops below the fixed price, the government subsidizes the wind farm.
If this had worked as intended, taxpayers would have been protected and wind project investors would have had the assurance to proceed. Nobody makes a lot of money, but nobody loses a lot of money. But the massive, unexpected spike in global inflation caused by the pandemic threw labor costs completely out of whack, so the best economic choice for wind projects is to simply cancel them rather than pay hugely inflated labor costs to build the farm and then, on top of that, pay taxpayers the difference on inflated energy market rates under the theoretical insurance that taxpayers were supposed to provide. There's no reason to buy insurance for something you KNOW you're going to lose money on.
Think of it like a crop. The government says "We'll buy your entire crop for $1/kg of grain." The farmer says "great" and starts planting. But then the price of labor skyrockets and the cost to actually harvest the grain goes up to $2/kg. Now the farmer is in a predicament: every kg of grain they harvest will lose them $1, so there's no reason to bother harvesting. The government contract, which was designed to ensure the farmer had a sufficient incentive to plant the crop, has backfired, and the farmer is trapped in a horrible deal that incentivizes them to just let the crop rot rather than sell it to the government (or sell it on the market and pay the government the difference).
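The two-sided settlement described above can be sketched with the crop analogy's made-up $/kg prices:

```python
def settlement_per_kg(fixed_price, market_price):
    """Contract-for-difference style settlement per kg of grain.
    Positive: the government tops the farmer up.
    Negative: the farmer owes the government the difference."""
    return fixed_price - market_price

# Hypothetical $/kg prices:
print(settlement_per_kg(1.00, 0.50))  # market low: govt subsidizes the farmer
print(settlement_per_kg(1.00, 1.50))  # market high: farmer pays the govt
```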
Khronos was tasked with delivering a replacement for VRML/X3D and they did that. It's very narrowly defined and does that simple task very well.
Pixar, Industrial Light & Magic, DreamWorks, Autodesk, Nvidia, etc. created a standard to handle feature-film-quality visual effects. USD handles that.
They are two very, very different standards with wildly different scopes. This isn't an effort to kill glTF, except insofar as companies are saying "hey, USD can handle billions of objects and terabytes of geometry and textures that would choke glTF. USD is overkill, and this isn't why it was developed, but... why not just use it everywhere?"
glTF is very simplistic. USD allows nearly infinite flexibility. It can handle scenes as simple as a single model for an AR web experience or as complicated as a photorealistic scene encompassing the entirety of New York City modeled down to the screws on a fire hydrant.
It's like the difference between a JPEG and a PDF. They're both useful for different things. You might deliver a glTF but you can't use a glTF as a universal scene descriptor for numerous reasons nor was it intended to be usable as an interchange format for something like a Marvel blockbuster VFX scene.
USD works great on a small scale because it's already been optimized to handle mind-bogglingly large and complex scenes. It's hugely overkill but very efficient. It natively supports streaming assets, levels of detail, and plugins for defining arbitrary file formats, like OpenVDB for volumetric effects such as smoke or clouds. USD lets you arbitrarily define scene objects, assign metadata and properties to them, and then leave it up to the renderer to figure out what was intended. glTF assumes specific shaders, specific geometry loading, specific texture formats, etc. That's great for an interchangeable file format for lightweight web scenes but can't handle a Pixar film.
Compared to FBX, the previous de facto standard for 3D assets, USD is extremely lightweight.
You should care because the whole 3D industry has already declared USD the standard (because it's really good). So if you want to deal with 3D assets... they're going to be USD.
There's a famous quote about politicians, and I can't remember it exactly or where I heard it but it goes something like...
First, they deny there's a problem.
Then they admit there's a problem but say it's too expensive or difficult to solve.
Finally, they agree it should have been solved but now it's too late.
The problem with the "Save the Planet" denialism is that previously, when the Earth didn't have ice caps, species had hundreds of thousands or millions of years to adapt.
Evolution on the scale of decades is less successful.
Ahhh yes, back in the good old days when "Global Warming was over!"
Damn it Ocean, you were supposed to have covered up our criminally stupid manipulation of statistics forever!
If you're bug-compatible and compiling open-source software, though, then by definition you can't say it's "illegally distributed source code."
If I create an Ansible recipe that deploys an exact collection of software onto a server with a specific kernel build, then by definition a separate Ansible recipe will produce a bit-for-bit identical server if it installs the exact same tooling. How would they know whether you used their playbook or one that happens to be identical?
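A sketch of the underlying point (plain Python, not Ansible; the paths and file contents are invented): if every byte of the deployed tree matches, the hashes match, and the artifact carries no trace of which recipe produced it.

```python
import hashlib

def tree_hash(files):
    """Hash an ordered list of (path, content) pairs into one digest."""
    h = hashlib.sha256()
    for path, data in files:
        h.update(path.encode())
        h.update(data)
    return h.hexdigest()

# Two deployments built from *different* playbooks that install identical tooling:
deploy_a = [("/etc/app.conf", b"port=8080\n"), ("/usr/bin/app", b"\x7fELF binary")]
deploy_b = [("/etc/app.conf", b"port=8080\n"), ("/usr/bin/app", b"\x7fELF binary")]

# Bit-for-bit identical output is indistinguishable, whatever produced it.
assert tree_hash(deploy_a) == tree_hash(deploy_b)
```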
Again, I'm really confused what spec you're reading. Or how you can so spectacularly misread the clearly written spec.
6.1 Voltage Rating
The North American Charging Standard exists in both a 500V rated configuration and a 1,000V rated configuration. The 1,000V version is mechanically backwards compatible (i.e. 500V inlets can mate with 1,000V connectors and 500V connectors can mate with 1,000V inlets).
6.2 Current Rating
The North American Charging Standard shall specify no maximum current rating. [...]
Tesla has successfully operated the North American Charging Standard above 900A continuously with a non-liquid cooled vehicle inlet.
Tesla v3 Superchargers are 480 V, and 900 A is way in excess of what v3 Superchargers deliver.
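For scale, the arithmetic at those quoted figures (the ~250 kW v3 peak in the comment is the commonly cited number, not something from the spec excerpt above):

```python
volts = 480   # v3 Supercharger voltage per the comment above
amps = 900    # continuous current Tesla reports running through a NACS inlet
power_kw = volts * amps / 1000

print(power_kw)  # 432.0 kW -- far beyond a v3 Supercharger's ~250 kW peak
```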
If I have to walk 400 miles regularly my life might technically be a little longer, but I'll spend a good portion of it walking.
"The operator uses any provided method to indicate the desire to begin a charge session."
This is a spectacularly gross misrepresentation of the spec. That line is only talking about the software used to authorize a charge session: e.g., whether you want a credit card to start it, whether you want it coin-operated like a gumball machine, or whether you want it to start for free regardless of payment.
The actual standard:
4.5.1 For DC charging, communication between the EV and EVSE shall be power line communication over the control pilot line as depicted in DIN 70121.
4.5.2 The North American Charging Standard is compatible with "plug and charge" as defined in ISO-15118.
DIN 70121 is very clearly defined. ISO-15118 is clearly defined.
The NACS standard clearly defines the NACS protocol: it's DIN 70121 and ISO-15118 (a.k.a. the same protocols CCS uses).