Comment Re: Windows (Score 1) 224

I recall reading this recently, in ESR's 'Things Every Hacker Once Knew' article:

That property is still useful, and thus in 2017 the AT convention has survived in some interesting places. AT commands have been found to perform control functions on 3G and 4G cellular modems used in smartphones. On one widely deployed variety, "AT+QLINUXCMD=" is a prefix that passes commands to an instance of Linux running in firmware on the chip itself (separately from whatever OS might be running visibly on the phone).

As well as in TVs (e.g. my Bravia), synth workstations, etc. Once you take an interest in how stuff works, it is astonishing how ubiquitous Linux has become. Some point to the 'don't make money in those areas' arguments, but they forget history, and the consequences when cheap commodity and consumer goods become good enough in an area which was previously the preserve of high-end equipment.
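To make the quoted AT convention concrete, here is a minimal sketch of composing such a command. The helper name is mine, and the payload is a hypothetical example; only the AT+QLINUXCMD= prefix comes from the article quoted above. Actually transmitting it would need a serial library such as pyserial and a real modem device, so that part is left as a comment.

```python
def build_at_command(prefix: str, payload: str = "") -> bytes:
    """Compose one AT command line; modems expect a CR (sometimes CRLF) terminator."""
    return (prefix + payload + "\r").encode("ascii")

# Hypothetical example: passing a shell command to the modem's embedded Linux,
# via the AT+QLINUXCMD= prefix quoted above.
cmd = build_at_command('AT+QLINUXCMD=', '"uname -a"')

# Sending it would look something like (assumes pyserial and a real device):
#   import serial
#   with serial.Serial("/dev/ttyUSB2", 115200, timeout=1) as port:
#       port.write(cmd)
#       print(port.read(256))
```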

Comment Something that has to happen: (Score 2) 251

On Linux, something I find very annoying about apt-get is that everything goes into a single /usr hierarchy, rather than into multiple hierarchies which are then overlaid. Right now, doing anything like that is a hack at best. But serious thought, on all OSs, needs to be given to the following:

The point is to make the core of the OS read-only at runtime, preferably read-only at the hardware level (that is, install the OS on a small SSD which even the kernel cannot write to during normal running, and which delegates to the writable portion of the filesystem only those configuration settings that may be overridden).
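The read-only core plus writable overlay arrangement can already be approximated on Linux with overlayfs, whose lowerdir/upperdir/workdir options are real; the specific paths below are invented for illustration. A sketch of building the mount invocation:

```python
def overlay_mount_cmd(lower: str, upper: str, work: str, target: str) -> list[str]:
    """Build a mount(8) invocation for an overlayfs: a read-only lower layer
    (the immutable core OS) with a writable upper layer stacked on top."""
    opts = f"lowerdir={lower},upperdir={upper},workdir={work}"
    return ["mount", "-t", "overlay", "overlay", "-o", opts, target]

# Hypothetical layout: a read-only /usr with local additions overlaid.
cmd = overlay_mount_cmd("/usr", "/writable/usr-upper", "/writable/usr-work",
                        "/merged/usr")
```

Running the resulting command (as root) would present /merged/usr as the union of the two layers, with all writes diverted to the upper layer.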

Essentially, the 'principle of least privilege' is something OS designers need to give far more serious thought to, along with which privileges are actually needed during normal runtime. Updating the core OS should be done from a 'secondary OS' whose only purpose is updating the core OS, and which is restricted by its nature to doing only that. (The ideal place for this is in PC firmware: use the firmware to install the base OS, and once booted, the base OS is effectively immutable.)

(Yes, this is basically a coarse capability-based security system, partially enforced in hardware, in a way which leaves users in control.)

Comment Re: Software should have copyright - and nothing m (Score 1) 104

Software patents are akin to patenting, e.g., plot devices in stories. Imagine if one author patented revealing the killer at the start, another patented revealing him at the end, and another patented revealing the culprit two thirds of the way in. Certainly abolishing software patents would not inhibit progress in software development; without them, more progress would be made.

Comment Original controller, hardware hacking, arduinos... (Score 1) 262

If you take an original controller, open it up, solder in a few wires, and connect the other end to an Arduino contraption, you can send pretty much whatever information you like to the console via an 'original controller'. And people learning how to make things, modify things, and so on is _way_ more important than console gaming: one has the capacity to let people solve interesting problems, the other is a recreation.
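The core of what such an Arduino sketch does is serialize button states into the console's wire format. Every console has its own protocol, so the layout below is purely illustrative; the idea of one bit per button is how many classic pads actually work.

```python
# Hypothetical button layout; real controllers each define their own order.
BUTTONS = ["up", "down", "left", "right", "a", "b", "start", "select"]

def pack_buttons(pressed: set[str]) -> int:
    """Pack a set of pressed buttons into one byte, one bit per button,
    roughly as a microcontroller would shift them out to the console."""
    byte = 0
    for bit, name in enumerate(BUTTONS):
        if name in pressed:
            byte |= 1 << bit
    return byte

pack_buttons({"a", "start"})  # bit 4 ('a') and bit 6 ('start') set -> 0b1010000
```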

Comment Easy Peasy (Score 1) 309

For each natural number i, define a binary operation M_i such that for all x, y we have x M_i y = i: basically an infinite family of trivial constant operations. Then for each i we have 4 M_i 4 + 4 - 4 = i. If you're allowed arbitrary operations, it is trivially easy. What is interesting is the interplay between which operations you are allowed and which results are possible. If you work in reverse Polish notation (like Forth), you write things as 4 4 4 4 A B C, where A, B and C are your choices of binary operations. With a choice of N binary operations, you can naturally produce at most N^3 distinct results. So really what you are studying is a function from the power set (set of all subsets) of the set of all binary operations on numbers (for some notion of number, e.g. real, complex, surreal, etc.) to the power set of the numbers.
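The N^3 bound is easy to check by brute force. Here is a sketch that evaluates 4 4 4 4 A B C for every choice of A, B, C drawn from the four standard arithmetic operations (postfix evaluation of that expression reduces to 4 C (4 B (4 A 4))):

```python
from itertools import product
from operator import add, sub, mul, truediv

def rpn_results(ops):
    """Evaluate '4 4 4 4 A B C' in postfix for every choice of binary
    operations A, B, C from ops; at most len(ops)**3 distinct results."""
    out = set()
    for a, b, c in product(ops, repeat=3):
        try:
            out.add(c(4, b(4, a(4, 4))))
        except ZeroDivisionError:
            pass  # e.g. 4 / (4 - 4) is undefined, so skip it
    return out

reachable = rpn_results([add, sub, mul, truediv])
# With 4 operations there are at most 4**3 = 64 results, fewer after
# collisions like 4 + (4 - (4 + 4)) == 4 - (4 - (4 - 4)) == 0.
```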

Comment Granular permissions (Score 1) 229

Something Android does, or at least tries to do, is have a granular permissions system for apps. Chrome should do similarly for websites, with those things capable of causing problems switched off by default. For sites that genuinely make good use of Bluetooth (and where the user is happy with this), it should be easy enough to grant permissions. In addition, the moment of granting a permission is an opportunity to present information, and to hide or flag the more dangerous choices.
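A default-deny, per-origin permission store of the kind described could be sketched as follows; the class and capability names here are invented for illustration, not Chrome's actual internals:

```python
# Capabilities that stay off unless the user explicitly grants them.
DANGEROUS = {"bluetooth", "usb", "midi"}

class SitePermissions:
    """Minimal default-deny permission store, keyed by origin."""
    def __init__(self):
        self._grants = {}  # origin -> set of granted capabilities

    def grant(self, origin: str, capability: str) -> None:
        self._grants.setdefault(origin, set()).add(capability)

    def allowed(self, origin: str, capability: str) -> bool:
        # Nothing is allowed unless explicitly granted: default deny.
        return capability in self._grants.get(origin, set())

perms = SitePermissions()
perms.allowed("https://example.com", "bluetooth")  # False before any grant
perms.grant("https://example.com", "bluetooth")
perms.allowed("https://example.com", "bluetooth")  # True after explicit grant
```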

Comment Laughable (Score 1) 328

It is laughable that people talk of it as an 'either/or' thing. In the modern world, people need a grasp of foreign languages, since people need to talk to people; people need a grasp of programming, so that computers are not so much 'magic black boxes with flashing lights'; and people need to grasp the languages of maths and science. Figuring out how to teach people, and how to get across why grokking these things is a good idea, is a research project nobody at the top of the education establishment seems to want to take on fully.

Comment And for those of us who don't want toys... (Score 1) 171

When are they gonna produce straightforward machines that run Windows software well, don't get in the friggin' way, and have an app launcher without ten tons of stupid AI in it, where a simple user-configurable menu (with a simple search facility) suffices, and so on? Like Apple, they are chasing the shiny smartphone consumer market and near-abandoning everybody else.

Comment From the 'why not earlier' department... (Score 2) 267

For years, there was a shift towards avoiding expensive coprocessors and related hardware by having more and more work done by the CPU. The massive growth in single-core speeds in e.g. Intel chips made this sensible. Now that single-core speeds are no longer increasing, we are having to go multi-core, and power consumption is becoming more of an issue, a rethink is becoming pertinent. Way back when, mainframes would have things like I/O done by independent hardware subsystems, to avoid using expensive time on the main CPUs, and now it seems this is being rediscovered.

Firstly, especially in something like macOS, there has been progress towards offloading more and more of Quartz to the GPU. Many GUI tasks could quite happily be handled by a low-power ARM chip on the GPU itself. Already with programmable shaders, and now Vulkan, we are getting to the point where, for graphics, things are accomplished by sending programs, requests and data buffers over a high-speed interconnect (usually the PCIe bus). To some degree, network-transparent graphics are being reinvented, though here the 'network' is the PCIe bus rather than 10baseT. Having something like an ARM core, with a few specialised bits, handle most drawing operations, with much of the windowing and drawing living largely at the GPU end of the bus, is one step towards a more efficient architecture: for most of what your PC does, using an Intel Core is overkill and wasteful of power. Getting to the point where the main CPUs can be switched off when idling will save a lot of power. In addition, one can look to the mainframe architectures of old for inspiration.

Another part of that inspiration is to do the same with I/O. Moving mounting/unmounting and filesystems off to another subsystem run by a small ARM (or similar) core makes a lot of sense. To the main CPU it has the appearance of a programmable DMA system, to which you merely need to send requests. The small I/O core doing this could be little different to the kind of few-dollar SoC we find in cheap smartphones. Moreover, it does not need the capacity to run arbitrary software (nor should it have it: since its job is more limited, it is more straightforward to lock down).
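The 'programmable DMA' view the main CPU would see can be sketched as a request queue serviced by the small I/O core. All field names, opcodes and classes below are invented for illustration; the point is only the division of labour: the main CPU enqueues self-describing requests and moves on, while the I/O core drains the queue.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class IORequest:
    op: str          # e.g. "read" | "write" | "mount" | "unmount"
    path: str
    offset: int = 0
    length: int = 0

class IOCore:
    """Stand-in for the small ARM-class core servicing the request queue."""
    def __init__(self):
        self.queue = deque()
        self.log = []

    def submit(self, req: IORequest) -> None:
        # The main CPU's only job: enqueue the request and carry on.
        self.queue.append(req)

    def service(self) -> None:
        # Runs on the I/O core, not the main CPU.
        while self.queue:
            req = self.queue.popleft()
            self.log.append(f"{req.op} {req.path}")

core = IOCore()
core.submit(IORequest("mount", "/dev/sda1"))
core.submit(IORequest("read", "/etc/fstab", 0, 4096))
core.service()
```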

This puts you at a point where, especially if you do the 'big-core/little-core' thing with the GPU architecture itself, the system can start up to the point where there is a useable GUI and command line interface before the 'main processors' have even booted up. Essentially you have something a bit like a Chromebook with the traditional 'Central Processing Unit' becoming a coprocessor for handling user tasks.

I'd also go so far as to suggest moving what are traditionally the kernel's duties 'out-of-band': on a multi-core CPU, have a small RISC core handle kernel duties, and, as far as hyperthreading is concerned, have this 'out-of-band kernel' able to save/load state from the inactive thread on a hyperthreaded core. (Essentially, if you have a 2-thread core, the chip then has a state cache for these threads, from which it can save/load thread state to main memory; importantly, much of the CPU overhead of a context switch is removed.)
