Comment Re: More class warfare... (Score 1) 302

I don't think anyone knowledgeable in the subject considers CO2 different based on its point of origin. For better or worse, the American dream of "cars, house, white picket fence, etc." now covers the world, and it doesn't take much math to realize that much of the world will never achieve it.

The world is saying: "Give us a new American dream with much less CO2." While there are ideas, on the whole no one knows how.

It's likely Americans will either be the first to figure it out, or the drought-vulnerable areas will stop producing food.

Psychologically, this is one of those situations where you either figure it out now or suffer the consequences later. Most people aren't good at those. There's a lot of "do we really have to?" sentiment on display.

Comment Re: Not surprised (Score 4, Interesting) 115

The sad truth is that much of the "reliable" copper emergency infrastructure has been switched to IP and fiber, and I'm not really sure the copper phone infrastructure that remains is still as reliable as it used to be.

The phone carriers don't prioritize emergency communications. During Katrina, it was the ham radio operators that kept communications running.

With fires and floods, California needs reliable infrastructure too.

Comment Re: Cost? (Score 1) 155

Yes. Once the 386 arrived, cost was why Linux took off. Anyone could run Linux, while a Unix or Xenix license plus the hardware to run it was very expensive.

It wasn't long before the Unix computers were undesirable because they didn't run Linux. The workstations were also only two to three years ahead of PC hardware, so you needed an application that demanded the fastest hardware to justify the 4-to-10-times price premium, plus the ongoing budget to stay in that arms race. Otherwise, after a couple of years the PC/Linux solution would be faster at a fraction of the cost.

When the web arrived, some companies were able to show that 64-bit hardware was much faster (AltaVista), but that was the last gasp of the workstation market.

Comment Re:Doesn't make a lot of sense to me (Score 1) 143

I never understood that aspect. Some of the sums were gigantic: 50,000 pounds sterling. Where do you put £50,000 in loose bills without it showing up on a security camera or similar? And what was the turnover in the office such that that amount could plausibly go missing?

Many people must have deliberately switched off their brains.

Also, under US-style disclosure rules at least, a good legal team would go through each transaction until they found some that looked very suspicious, then see whether they could prove those transactions never happened. Unfortunately, I think the people involved couldn't afford that kind of legal representation.

Comment Re:Maybe Rust would be a Much Better Choice (Score 1) 139

Why isn't the real-time control system a headless program or service, with the GUI (that depends on dubious libraries) just managing it via IPC of some kind?

My instinct would be to do exactly as you suggest. Unfortunately, I didn't write the program, and this is likely one of the big design mistakes. Essentially, the real-time bits are blended in with the UI bits, and the result is a bit of a milkshake. It's difficult to unscramble at this point.

I can't fault the original developer too badly either. Microsoft at one point encouraged threading without really thinking through all of its bad aspects. They basically said: here is a thread-creation function, have fun!
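
For what it's worth, a minimal sketch of the split the parent suggests, assuming POSIX sockets; port 5555, the setpoint variable, and the one-line protocol are all invented for illustration, and the "control loop" is a stub:

// Hedged sketch: headless real-time loop plus a tiny IPC endpoint.
// Error handling is elided for brevity.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <atomic>
#include <chrono>
#include <cstdlib>
#include <cstring>
#include <string>
#include <thread>

static std::atomic<double> setpoint{0.0};    // state a GUI is allowed to change

// Stand-in for the real-time control loop: no GUI code, no GUI libraries.
static void control_loop() {
    for (;;) {
        double sp = setpoint.load(std::memory_order_relaxed);
        (void)sp;                            // ...drive the hardware here...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::thread rt(control_loop);

    // Minimal line-oriented IPC endpoint on 127.0.0.1:5555 (port is arbitrary).
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    listen(srv, 1);

    for (;;) {
        int cli = accept(srv, nullptr, nullptr);
        if (cli < 0) continue;
        char buf[64] = {};
        if (read(cli, buf, sizeof buf - 1) > 0) {
            // Protocol: "set <value>" updates the setpoint; anything else reads it back.
            if (std::strncmp(buf, "set ", 4) == 0)
                setpoint.store(std::strtod(buf + 4, nullptr));
            std::string reply = std::to_string(setpoint.load()) + "\n";
            write(cli, reply.c_str(), reply.size());
        }
        close(cli);
    }
    rt.join();                               // never reached in this sketch
}

With that split, the GUI becomes just another IPC client, and a crash in a GUI library can't take the control loop down with it.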

Comment Re:Maybe Rust would be a Much Better Choice (Score 5, Insightful) 139

Getting the Linux kernel to compile using a C++ compiler is a considerably different activity than switching the language of the kernel to C++.

I'm in the middle of trying to figure out how to fix bugs in an old C++ MFC program that does real-time communications. These are my current problems with C++:

1. Estimating the performance of C++ code. For example, what does a simple statement like A = B do? In C, you can more or less answer that question. In C++, anything can happen. I have debugged libraries that specifically say A = B will not leak memory, and yet it does (see the first sketch at the end of this list).

2. Which version of C++? The C++ standards committee has been really busy lately ...

3. Unlike the preprocessor, the template machinery does not dump its intermediate C++ code. One of the nice features of the C preprocessor is that if you don't understand what it is doing, you can have it dump its output as plain C. The MS C++ compiler generates a small book of error messages if I get the type annotations wrong when using templates, and those messages don't clearly suggest a fix.

4. If you are trying to do solid, multi-threaded, real-time code, C++ doesn't help you. It doesn't stop you. But it doesn't help you either.

5. If you are still trying to do multi-threaded code, and you follow the whole discussion from shared_ptr (not thread safe), to atomic_shared_ptr (slow, possibly locking), to hazard pointers (lock-free), to fast shared pointers built over hazard pointers (multi-threaded and lock-free), then you realize two things (see the second sketch at the end of this comment):
(a) this is a new way of doing garbage collection (as in C#/Java), and
(b) the new fast hazard/shared pointers are too new to be in the C++ standard library!

6. It's almost impossible to write readable C++ code without a style guide. The style guide will always be out of date. How do you enforce a style guide on a project of the size, scope, and age of Linux, or of any other long-lived project?

7. Correctness: how do you analyze code correctness in C++?
(a) It is not obvious what any given statement does.
(b) It is not obvious what the maximum execution time of a block of code is.
(c) It is not obvious that any given pointer is valid.
(d) It is not obvious that smart pointer accesses are wait-free.
(e) Even if you get all of your pointers to be valid, it is not obvious that libraries won't create indeterminate state or suddenly call exit(). This is a real problem. We get user complaints like "I clicked on this button to enter a number and the program disappeared." We traced one to a library, which calls something, which calls something else, which hits an error condition and terminates with exit(). Our program is a real-time control system; it can't give up and call exit().
(f) For long-lived applications, C++ works great as long as you can guarantee the program handles every exception it can throw. Does C++ help you do that? Can I even get a listing of all unhandled exceptions, or of which exceptions are handled where?
(g) Real-time control systems need to execute in finite memory. Much of the C++ library assumes that an unlimited heap is available.

8. Would it be too much to ask for a safe array type? Something that handles a[-1] gracefully? My PLC structured text compiler throws an error if I even type such a thing; it will never cause a GPF on an invalid array access. What standard data type in C++ does something similar?

9. Can we have a safe, multi-threaded, variable-length array type, please?
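
Here is the first sketch, referenced under point 1: a generic, made-up illustration of why the cost of A = B is invisible at the call site. None of this is code from the program described above.

#include <vector>

struct PlainPod { int x, y; };          // A = B copies two ints, much like C

struct BigBlob {
    // Copy assignment walks a million doubles and may allocate and throw.
    std::vector<double> samples = std::vector<double>(1'000'000);
};

struct Sneaky {
    // A = B runs whatever somebody put in operator=: locking, I/O, or
    // (in a buggy library) the memory leak described above.
    Sneaky& operator=(const Sneaky&) { /* anything at all */ return *this; }
};

int main() {
    PlainPod a{}, b{1, 2};  a = b;   // cheap and predictable
    BigBlob  c, d;          c = d;   // copies ~8 MB, may allocate, may throw
    Sneaky   e, f;          e = f;   // cost unknowable without reading operator=
}

The three assignments are syntactically identical; their costs are not, and nothing at the call site tells you which case you are in.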

Is C++ even the correct language for a large program like the Linux kernel?
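
And the second sketch, referenced under point 5: one writer publishing configurations to several readers. With a plain shared_ptr member, concurrent load and store of the same pointer object would be a data race; the C++20 std::atomic<std::shared_ptr<T>> specialization (assuming your standard library actually ships it) makes the slot itself atomic, possibly by locking internally, which is the "slow" option mentioned above. The Config type, the counts, and the variable names are invented for illustration.

#include <atomic>
#include <cstdio>
#include <memory>
#include <thread>
#include <vector>

struct Config { int gain = 1; };

std::atomic<std::shared_ptr<Config>> current{std::make_shared<Config>()};

int main() {
    // Writer: publishes new configurations.
    std::thread writer([] {
        for (int i = 0; i < 1000; ++i) {
            auto next = std::make_shared<Config>();
            next->gain = i;
            current.store(next);                            // atomic publish
        }
    });

    // Readers: take a snapshot and work with it; the snapshot stays valid
    // even if the writer replaces `current` a moment later.
    std::vector<std::thread> readers;
    for (int r = 0; r < 4; ++r) {
        readers.emplace_back([] {
            long sum = 0;
            for (int i = 0; i < 1000; ++i) {
                std::shared_ptr<Config> snap = current.load();  // atomic read
                sum += snap->gain;
            }
            std::printf("reader done, sum=%ld\n", sum);
        });
    }

    writer.join();
    for (auto& t : readers) t.join();
}

If current were a plain std::shared_ptr, the same writer/reader pattern would be a data race on the pointer object itself.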

Comment Re:What a hypocritical asshole (Score 1) 293

To clarify:
"Some of the automated thesis generators "

should read
"Some of the automatic index generators automatically remove simple words, like "is". However, certain words of topic-specific general discussion should not appear in the index, and this isn't handled automatically correctly."

I doubt an automated citation program would generate better cites for a PhD thesis than an automated index generator generates indexes.

Comment Re:What a hypocritical asshole (Score 1) 293

Yes. It would be incorrect to cite small fragments like that; the exercise is completely pointless. Firstly, the details of the modern formulation are frequently slightly different from the original paper, often because nomenclature in the field has improved over time. So the citation would point to a subsequent paper, likely discussing something else, rather than to the original paper that actually defined the concept, and the reference wouldn't be useful. Secondly, referencing every short sentence or fragment would break the reference system. If you did it the way a computer would, you would wind up with a reference section full of off-topic references. For example, a variation of the sentence "This is a thesis in [subject area]" appears in almost every thesis, and there is no academic reason to cite it. The sentence probably appears in the style guide of the college or university, so why cite it?

This is also the argument against using programs to automatically generate indexes. They generate results like "is" is used on every page of the thesis. Some of the automated thesis generators automatically

It is also incorrect to reference Wikipedia. All references in a PhD or academic paper should be to enduring materials that can't easily be changed.

Further, a foundational work will likely have a pile of quotes from it sitting in Wikipedia. Then who did the copying? Did the text in the thesis predate the page in Wikipedia? For many of the old fossils at universities, it likely did. Only the younger faculty have the issue of verbatim quoting from Wikipedia.

Lastly, the document is ultimately for the reader. The citations are there to help the reader cross-check the results or look up materials for further understanding. They are not there to provide text for Turnitin to train new AI engines, or to create a generation of students adept at feeding computers the correct answer. I sincerely hope we have not lost the idea that the goal is the spread of scholarly knowledge.

Comment Re:Board overplayed their hand (Score 4, Interesting) 100

If the board thinks the CEO is being less than truthful, then they pretty much have no choice. If the CEO isn't being candid with the board, then there are no governance controls in the company. A CEO can do vastly more damage than a rogue employee; ask ARM about losing control of its China operations.

HP had something similar with Mark Hurd. HP was big enough that it eventually overcame the turbulence. Smaller companies don't survive.

It is much easier to destroy a company from the top than from the bottom. This could be a lose-lose situation for OpenAI: losing the CEO will destroy the company, and keeping the CEO will also destroy the company.

Depending on what the transgression is, the board can discipline the CEO without firing them. OpenAI should have done this before Sam's departure. At this point, bringing back Sam Altman will likely mean the board abandoning its oversight role, which will end badly.

Comment Re:SSH is less secure as a practical matter (Score 3, Interesting) 95

"Trust on first use" is intended for applications where both computers are on the same local "secure" network.

If memory serves, Amazon AWS doesn't just hand out the server's SSH key on first use, for this reason.

And most security can be defeated by users ... :(

Comment Compiler solution? (Score 4, Interesting) 95

This requires one of the two computers to have an infrequent hardware error. If one assumes a low error rate (1 in a billion) and a multi-core processor, then it should be possible to fix this class of bugs in software.

Theoretically, it should be possible to create a compiler that takes any given encryption algorithm, compiles it in two heterogeneous ways, cross-checks the calculation, and only sends confirmed-correct information out over the wire. This is already done in some safety-sensitive applications, often with two processors, one running each of the two different programs.

For software like SSH, this would mean coding the security-sensitive algorithms with a language/compiler/toolchain that automatically confirms the results before sending them out.
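
For what it's worth, here is a minimal sketch of that cross-check, with a toy keyed checksum standing in for the real signature math. The function names and constants are invented, and a real deployment would compare two independently compiled builds of the actual signing routine rather than two hand-written variants.

// Hedged sketch of the dual-computation idea: run the same security-critical
// calculation through two independently built code paths and refuse to
// transmit unless they agree.  The "signature" is a toy keyed checksum, not a
// real RSA/ECDSA implementation.
#include <cstdint>
#include <iostream>
#include <optional>
#include <string>

// Path A: straightforward loop.  In a real system this would be one build of
// the actual signing routine.
static uint64_t toy_sign_a(const std::string& msg, uint64_t key) {
    uint64_t acc = key;
    for (unsigned char c : msg) acc = acc * 1099511628211ULL + c;
    return acc;
}

// Path B: same mathematics, deliberately structured differently.  In a real
// system this would be a second, independently compiled build of the routine.
static uint64_t toy_sign_b(const std::string& msg, uint64_t key) {
    uint64_t acc = key;
    for (std::size_t i = 0; i < msg.size(); ++i) {
        acc *= 1099511628211ULL;
        acc += static_cast<unsigned char>(msg[i]);
    }
    return acc;
}

// Only hand a result to the network layer if both paths agree; a hardware
// fault in one path shows up as a mismatch and nothing faulty leaves the box.
static std::optional<uint64_t> sign_checked(const std::string& msg, uint64_t key) {
    const uint64_t a = toy_sign_a(msg, key);
    const uint64_t b = toy_sign_b(msg, key);
    if (a != b) return std::nullopt;   // suspected fault: drop, retry, or alarm
    return a;
}

int main() {
    if (auto sig = sign_checked("client hello", 0xCBF29CE484222325ULL)) {
        std::cout << "send signature " << std::hex << *sig << "\n";
    } else {
        std::cout << "fault detected, signature withheld\n";
    }
}

On a healthy machine the two paths always agree; a bit flip in either one turns into a withheld signature instead of faulty output on the wire.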

Comment Re:Yes, but... (Score 2) 132

Which do you trust more, the government or Comcast?

Regulation is a double-edged sword. But when dealing with a monopoly, it is often better to have a regulator than have nothing.

For many areas, it would be really nice to have two broadband ISPs. It would be a big thing for the local small businesses, startups, and business development organizations. Think about the marketing possibilities for small municipalities if servers could be set up almost anywhere.

Currently, well-connected servers can only go into third-party datacenters at the few specific geographic locations where two or more carriers have fiber interconnects. There are places in the world where I can set up a home server on two 1 Gbit/s pipes from two different providers for about $200/month. Or I can be in the majority of the U.S. and have only one choice of provider, with some slow package.
