
Comment Various reasons (Score 1) 34

The biggest reason will be Intel's aversion to QA. Their reputation has been savaged, and the latest very poor benchmarks for the newest processors will not convince anyone Intel has what it takes.

I have no idea whether they could feed the designs for historic processors, from the Pentium up, along with the bugs found in those designs, into an AI to see if there's any pattern to the defects, some systematic weakness in how they explore problems.

But, frankly, at this point, they need to beef up QA fast and they need to spot poor design choices faster.

Others have mentioned poor R&D. Intel's strategy has always been "cast wide and filter", but this means everything is looked at on the cheap, with less manpower and less cash.

Their experiment with on-package memory was interesting, but you can't do that easily if all your real estate is taken up with cores and hyperthreading capability. It looks like they saw an idea without understanding why it worked.

There are ways they could get memory closer, which is what gives you the performance gain, without it being strictly on-die, and that might be better.

I'm seeing lots of dual-CPU servers, but the more cores share the L2 cache, the less cache each process can use. A 4-CPU or 8-CPU box should outperform a dual-CPU box where each CPU has double the cores. Back when SMP ruled, you had 16-CPU machines. Improving multi-CPU support would improve servers.
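As a back-of-envelope illustration of that cache arithmetic (a minimal sketch; the core counts and cache sizes are made-up numbers, not real SKUs):

```python
# Back-of-envelope: shared cache available per busy process.
# All figures are illustrative assumptions, not real SKUs.

def cache_per_process(cores_per_socket, cache_mb_per_socket):
    """Cache each core's process gets if every core is busy."""
    return cache_mb_per_socket / cores_per_socket

# Dual-CPU box, 32 cores per socket, 32 MB of shared cache per socket:
print(cache_per_process(32, 32))  # 1.0 MB per process

# 8-CPU box with the same 64 total cores (8 sockets x 8 cores),
# still 32 MB of shared cache per socket:
print(cache_per_process(8, 32))   # 4.0 MB per process
```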

Modern buses can support multiple masters; PCI Express has supported this since 2.1, IIRC. There's no obvious reason every core needs to be identical, so long as there's a way to keep instructions intended for one kind of core from going to another.

If you had integer-only cores (with the FPU real estate used for more L1 and L2 cache), then integer-only work (the bulk of the kernel, for example) would presumably be much, much faster, as there'd be far less fetching from slower levels of the memory hierarchy.
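A minimal sketch of why that follows, using the standard average-memory-access-time formula; the hit rates and cycle counts are illustrative assumptions, not measurements of any real part:

```python
# Average memory access time (AMAT): more cache -> higher hit rates
# -> fewer trips to DRAM. All latencies and hit rates are assumed.

def amat(l1_hit, l2_hit, l1_cycles=4, l2_cycles=12, dram_cycles=200):
    """Expected cycles per memory access for given hit rates."""
    return l1_cycles + (1 - l1_hit) * (l2_cycles + (1 - l2_hit) * dram_cycles)

print(amat(0.95, 0.80))  # baseline caches: ~6.6 cycles per access
print(amat(0.97, 0.90))  # bigger caches in the old FPU area: ~5.0
```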

Comment What, wait? (Score 1, Insightful) 26

They're going to use software that hallucinates frequently and can be duped into supplying what, in this case, would be highly sensitive top-secret data to any adversary that knows hexadecimal, as opposed to the Big Data models (including the "terrorist stock market" proposed by Admiral Poindexter)?

One can conclude two things from this:

1. Previous Big Data analysis systems are proving so unreliable that an AI which produces nonsense most of the time is an upgrade.

But the intelligence services have relied heavily on Big Data for at least the last two, maybe three, decades. If these systems are actually that bad, then security in Europe was effectively non-existent.

Given how much tax money goes into the security and defence industries, I think we're owed an explanation or a refund, if things really were that bad. If they weren't, then the DoD is downgrading security at vast expense, whilst placing the nation's defence secrets at extreme risk, and I think that should also warrant a bit of an explanation.

2. The USG doesn't believe the Pentagon to be a serious, functional organisation, and therefore it doesn't matter that it's buying a hallucinating security risk.

Submission + - Researchers say AI transcription tool used in hospitals invents things (apnews.com)

AmiMoJo writes: Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

It’s impossible to compare Nabla’s AI-generated transcript to the original recording because Nabla’s tool erases the original audio for “data safety reasons,” Raison said.

Comment Re:Not convinced. (Score 1) 98

It heats it up by half as much as it would if the energy came from the sun.

When the energy comes from the sun, it arrives at absorbable wavelengths in both directions through the atmosphere, and 100% of it interacts with the ground.

When it's transmitted at transparent wavelengths and then radiated as heat after doing work, it only interacts with the atmosphere in one direction, and, since the thermal emissions are omnidirectional, only half will interact with the ground.

Since there's only half the absorption, the net effect is a significant cooling of the Earth.
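A minimal sketch of that accounting, per unit of delivered power; the 1.0 and 0.5 factors are the claims above, not outputs of a radiative-transfer model:

```python
# The parent comment's accounting, per watt delivered to the ground.
# The 1.0 and 0.5 factors are the claims above, not measured values.

def fraction_heating_earth(from_sun):
    if from_sun:
        # Absorbable wavelengths both ways: all of it ends up as heat
        # that interacts with the ground and atmosphere.
        return 1.0
    # Beamed down at transparent wavelengths: only the re-radiated
    # waste heat matters, and omnidirectional emission sends half
    # of it back out to space.
    return 0.5

demand_watts = 100.0
print(demand_watts * fraction_heating_earth(True))   # 100.0 from sunlight
print(demand_watts * fraction_heating_earth(False))  # 50.0 when beamed
```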

Further, that's a substantial amount of energy, greatly reducing the need for fossil fuel power sources. This should reduce CO2 output, which will also help.

Comment I think APNIC never bothered to look (Score 2) 213

1. Autoconfiguration
2. Anycasting
3. Extensible headers
4. Prefix-driven routing
5. Simplified multicast
6. Simplified word-aligned headers
7. Wider labels for better handling of IntServ, DiffServ, and QoS
8. ICMPv4 router discovery and redirect, and ARP, were all replaced by a single simplified protocol (Neighbour Discovery)
9. Transparent routing protocols which restricted visibility of the topology of internal networks to external observers (originally devised by Telebit)

Removed from IPv6, but part of the original design so all technically reintroducible without breaking anything:

1. Automatic fragmentation elimination
2. Transparent Mobile IP
3. Mandatory encryption

That's an awful lot of features IPv6 has that IPv4 doesn't have and never can. You'll notice I don't mention address space, because that was never really the point of IPv6. It may have been the initial reason, but the bulk of the address is taken up with routing information, not machine ID.

The reason this was done was to support transparent Mobile IP. Your actual address was the suffix, and it stayed constant. If you moved between networks, the routing data changed, but your actual IP address, the suffix of the IPv6 address, stayed the same. The routers would automatically handle your migration, since your machine ID was unique on the Internet.

This could be done securely because there was, at that stage, mandatory encryption, which meant routers could authenticate that the machine claiming the new network really was you.

Yes, both of these got eliminated, but the way the addresses work stayed exactly the same. The prefix is the route, the suffix is the real address, and that suffix isn't significantly bigger than an IPv4 address. But the suffix is supposed to be unique on the Internet.
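A quick sketch of that split, using Python's standard ipaddress module; the address is a made-up one from the documentation range:

```python
# Split an IPv6 address into routing prefix and interface identifier,
# per the 64/64 split described above. The example address is made up
# (2001:db8::/32 is the documentation range).
import ipaddress

addr = ipaddress.IPv6Address("2001:db8:1234:5678:250:56ff:fe00:1234")
bits = int(addr)

routing_prefix = bits >> 64        # upper 64 bits: where you are
interface_id = bits & (2**64 - 1)  # lower 64 bits: who you are

print(hex(routing_prefix))  # changes when you move networks
print(hex(interface_id))    # stays constant: the "real" address
```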

As for routing, everything was done in 2-byte chunks, so you never had to handle entire IPv6 addresses on routers or do full matches. The absolute most you ever needed to inspect were the 2-byte chunks above, at, and below your router's position in the network.

And, in a strictly hierarchical design, you could eliminate the second of those.
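A sketch of that chunk-wise lookup; the example address and the router's position in the hierarchy are illustrative assumptions:

```python
# Hierarchical routing on 2-byte chunks: a router at a given depth
# only inspects the 16-bit groups around its own level. The address
# and level below are illustrative assumptions.
import ipaddress

def chunk(addr, level):
    """Return the 16-bit group at the given depth (0-7)."""
    groups = addr.exploded.split(":")  # eight 4-hex-digit groups
    return int(groups[level], 16)

addr = ipaddress.IPv6Address("2001:db8:beef:cafe::1")
my_level = 2  # this router's depth in the hierarchy

above = chunk(addr, my_level - 1)  # the network that delegated to us
here = chunk(addr, my_level)       # our own level
below = chunk(addr, my_level + 1)  # the next hop down
print(hex(above), hex(here), hex(below))
```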

So your router tables, if the software was correctly written, would be equal in size to IPv4 router tables, or only slightly larger, and subnetting was a breeze if your network was properly configured.

I was on IPv6 on September 27th, 1996. I ran 10 tunnels from a Linux 2.0.20 box with the experimental IPv6 patches. (No, I don't give a damn the spec changed later on, any more than any reader here cares which particular revision of IPv4 they're on. It was IPv6 and that's the end of the matter.)

I was using a mix of RIPv6 and static tunnels, and later on a very early IPv6-aware Apache server for wide-area testing. There were also third-party IPv6 stacks for Windows and Solaris, which I used to do local testing.

For those interested, that should be more than sufficient to look up the RIPE entry in the 6Bone.

Back then, the US Naval Research Laboratory provided a library which could take an IPv4 or IPv6 connection and hide the details from the app. They soon abandoned it, dunno why; it seemed a good idea.
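For what it's worth, a modern standard library can already do roughly that. A minimal Python sketch (host and port are arbitrary):

```python
# Version-agnostic connect: getaddrinfo under the hood returns IPv6
# and/or IPv4 candidates, and create_connection tries them in order.
# The app never has to know or care which one succeeded.
import socket

def connect(host, port):
    """Open a TCP connection without caring about the IP version."""
    return socket.create_connection((host, port), timeout=10)

sock = connect("www.example.com", 80)
print(sock.family)  # AF_INET6 or AF_INET; the app never chose
sock.close()
```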

Had everything been transparent, I doubt we'd be in this mess today, simply because no user would know or care what they used, and no app would, either. It would all be invisible, which is how it should be.

And how, for the most part, it is, on mobile phone networks, where IPv6 is used a lot.

Comment Re:Easy to say, but not practically fixable... (Score 2) 73

True, but AMD and Intel firing large numbers of QA staff from their chip lines and trying to accelerate development to stay ahead of the game isn't helping matters.

This was a gamble that was always doomed to lead to spectacular crashes; it was merely a question of how many and when.

Submission + - Linus Torvalds slams hardware security defects (phoronix.com)

jd writes: Linus Torvalds is not a happy camper and is condemning hardware vendors for poor security and the plethora of actual and theoretical attacks, especially as some of the new features being added impact the workarounds. These workarounds are now getting very expensive, CPU-wise.

TFA quotes Linus Torvalds:

"Honestly, I'm pretty damn fed up with buggy hardware and completely theoretical attacks that have never actually shown themselves to be used in practice.

So I think this time we push back on the hardware people and tell them it's *THEIR* damn problem, and if they can't even be bothered to say yay-or-nay, we just sit tight.

Because dammit, let's put the onus on where the blame lies, and not just take any random shit from bad hardware and say "oh, but it *might* be a problem".

Linus"

Comment Re:I went out there about 10 years ago (Score 1) 183

Plenty of places develop photos, but it depends on the quality you want.

Professional photographers like film because high-end film, particularly in medium and large formats, has a higher effective resolution and higher dynamic range than any affordable digital camera, so there are high-end developers catering to this crowd. In the UK, Analogue Wonderland is a good starting place.

But Walmart and other cheapo stores offer cheapo film development, usually sent off-site, so it can take a few days. However, for most people, low-end film in low-end cameras will produce worse images than digital.

Comment Not convinced. (Score 1) 98

The energy needed to produce diamond dust is very high, and if you add more heat than you remove, you've achieved nothing.

A large array of movable, modular solar collectors would seem better. Collect the energy over a very large area and beam it to somewhere useful. Earth might be good.

Because you can move them, you can control the level and location of shade. By beaming the energy to Earth, you reduce the need to generate energy on Earth. And solar cells in space should collect far more energy than on the ground, as there's no scattering or absorption by the atmosphere.
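Rough numbers (the solar constant is a real figure; the ground duty cycle and beaming losses are assumptions pulled out of the air):

```python
# Why orbital collectors gather more per square metre. The solar
# constant is a real figure; everything else is a rough assumption.

solar_constant = 1361.0  # W/m^2 above the atmosphere, continuous
ground_peak = 1000.0     # W/m^2 at noon under a clear sky (rough)
ground_duty = 0.25       # night, weather, sun angle (assumed)
beam_efficiency = 0.5    # assumed losses getting the power down

space_yield = solar_constant * beam_efficiency  # delivered W/m^2
ground_yield = ground_peak * ground_duty        # average W/m^2
print(space_yield / ground_yield)  # roughly 2.7x under these guesses
```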

This would be comparatively cheap over a relatively short timescale, quicker to construct (since it can be built in modules rather than all at once), and would have fewer unintended consequences.

Comment Re:No name brand loyalty (Score 1) 54

Just as there's a huge nutritional difference between the low-grade hamburgers that most fast food places produce and the sort of high-grade product made at better restaurants, there is a huge difference in memory use, CPU use, and efficacy across antivirus software.

The first problem is that you're assuming the name is the product, that there is only one solution and everyone uses it, whereas in fact the name merely gives a very generalised classification, and there are many solutions of differing quality and differing system demands.

The second problem is that you're assuming that "safe practices" are sufficient. They haven't been, for a very long time. The requirements for safe practices are actually far more stringent, because the threat has substantially changed.

Comment Trade wars (Score 1) 78

The only successful trade war in history was "Trade Wars 2000" on the BBS circuit.

Every other one has simply diminished both sides, essentially squeezing them both out of the marketplace.

It'll be worse this time, as the Internet, along with corporate failures to scrub resold disks, will simply lead to a faster, more corrupt transfer of skills to both China (which now has no reason not to use its vast wealth as leverage) and third parties interested in screwing both nations over.

America has botched its chip manufacturing plant, and one place in Europe makes the gear for producing masks. That means China only has to steal data from one place for mask tech, and there are maybe two or three interesting plants in Taiwan they've probably already taken designs from.

That shouldn't be hard. It's probably already taken, and it wouldn't surprise me if China has been secretly upgrading their domestic tech to replace all external dependencies on the bleeding edge.

America can't win by restricting sales; it can only win by doing better. And Intel, AMD, and Qualcomm have all suffered disastrous PR resulting from QA cutbacks, the same error Boeing made, with similar results.

In short, the USG should be ordering its critical industries to improve quality, and be spending money on that, not on controlling trade.

Because if China can make more reliable, more robust chips, those chips will be bought over those made in the US, even by US firms. Companies don't give a rat's about national loyalty, they care about profit and risk, and they're going to maximise the former and minimise the latter.

All China has to do is sell them unbranded through a third-party nation that's still fine to trade with. Same way governments have bypassed laws for centuries.
