Comment Re: Umm (Score 3, Insightful) 376

You cite an anecdotal article that does not apply to the places where most of the complaints about voter ID come from. Many elderly people, for example, lack an ID. In some rural areas it is also difficult to get one, since the DMV is often a significant distance away and open only for limited hours, typically during the working day. Harlem has a very different makeup from rural areas, and access to an ID there is far easier.

Here is a better article. About 11% of Americans do not have government issued photo identification cards. A federal court in Texas found that 608,740 registered voters didn't have the forms of identification required for voting.

The amount of voter fraud in the United States is exceedingly low, so voter ID laws are a solution in search of a problem. Out of 1 billion votes cast there were 44 cases of fraud, a rate of 0.0000044%.

There is also widespread evidence that such laws are designed to target Democratic voters, and that they tend to fall hardest on the poor and minorities.

Comment Re:Only a penny a page, duplex? (Score 1) 5

I'll be better able to figure it out when the cartridge is empty. The savings come from not having to pay eight or ten bucks for copies that I'm proofreading.

They're already online as free e-books, HTML, and PDF, with printed copies available at a price.

Comment Cataracts and Suse (Score 1) 6

IIRC you're Canadian (if in the US you'll need insurance) and should be able to get CrystaLens implants for an extra $2,000. They cure nearsightedness, farsightedness, astigmatism, and cataracts.

I ran Suse back in 2003 and liked it, but moved to Mandrake because my TV didn't like it; I was using the TV as a monitor over an S-video cable. Still trying to find a distro that will run on an old Gateway laptop.

Comment Re: Why the vagueness about the age? (Score 3, Interesting) 109

Yet you absolutely refuse to even consider anything religious.

That's profound ignorance, or an outright lie. For example, there are several published experiments on the efficacy of prayer (summary: praying doesn't help). At least one of those studies was funded by the allegedly pro-religion Templeton Foundation.

There seems to be some confusion in superstitious circles about what "keeping an open mind" actually means. It does not mean "accept anything you're told without evidence," or "accept anything you're told unless you can prove the opposite." It does mean "be open to evaluating new evidence when presented with it."

In other words, present your evidence for your religious claims. If the evidence holds up to scrutiny, the claim will be accepted. To my knowledge, this has yet to happen.

Comment Re:Only a penny a page, duplex? (Score 1) 5

I based the estimate on $50 for a cartridge that prints an average of 3,000 pages. A color laser would be nice but, as you say, far more expensive in both up-front cost and toner. And changing toner in a color printer is a PITA, at least it was with the ones at work.

Comment Re: Learn C for advanced security, not for basics (Score 1) 374

Many of these IoT devices have small amounts of memory and do not run Linux. Even on those that do run Linux, understanding C is still a must when you deal with hardware. I'm sorry, but the embedded world is based almost entirely on C, not Rust or any other language. Learning C for embedded work is essential. You'll get a lot further knowing C than knowing Rust or even Python (which is making some inroads). The most valuable jobs basically all require C.

Comment Re: Learn C for advanced security, not for basics (Score 1) 374

I agree with you about the overhead. C++'s overhead is generally minimal, though I always have a hard time convincing people of that. Back in the day all of my debugging was at the assembly level, because there was no source-level debugger for kernel device drivers.

Write barriers also guarantee that the data is actually written to memory. On many platforms (e.g. MIPS), a store instruction may not immediately reach the cache; the data can sit in a write-combining buffer. There are also different sync instructions, some of which preserve ordering and some of which don't. One has to explicitly issue a sync instruction to flush the write buffer out to the L2 cache. If the processor is not cache coherent (like a number of processors I've used), then you also need to explicitly flush or invalidate the cache as needed. Of course, all of this depends on where the write goes; writes to I/O registers typically don't need the synchronization.
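As a minimal sketch of the pattern (the register address and the inline asm are illustrative; the exact barrier an SoC needs varies):

    #include <stdint.h>

    /* Hypothetical memory-mapped device register; the address is made up
     * (a KSEG1, i.e. uncached, address on MIPS). */
    #define DEV_CTRL ((volatile uint32_t *)0xB0001000u)

    static inline void wmb(void)
    {
        /* On MIPS, SYNC drains the write buffer so the store is visible
         * beyond the core; other architectures need a different barrier. */
        __asm__ __volatile__("sync" ::: "memory");
    }

    void dev_start(uint32_t config)
    {
        *DEV_CTRL = config;   /* the store may sit in a write-combining buffer */
        wmb();                /* push it out before anything depends on it     */
    }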

Comment Re:Artificial language limits (Score 1) 374

Forth, though, requires something underneath it. Try writing Forth when there is no Forth interpreter or any BIOS. That 8K includes everything. I'd like to see a 64-bit Forth program compete; I highly doubt it could. In my case there is no BIOS, since this bootloader is the very first thing that executes once the chip exits reset.

Comment Re:Arduino uses C++, Pi uses Linux (Score 1) 374

Hardware registers can be defined as volatile bitfields, so a simple pointer can be used to access them.
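For illustration, that claim amounts to something like the following sketch; the register layout and address are invented, and bitfield ordering is implementation-defined, so real code pins down the compiler and ABI it is built for:

    #include <stdint.h>

    /* Invented layout of a UART-style control register. */
    struct uart_ctrl {
        uint32_t enable    : 1;
        uint32_t parity    : 2;
        uint32_t stop_bits : 1;
        uint32_t baud_div  : 16;
        uint32_t           : 12;   /* reserved */
    };

    #define UART_CTRL ((volatile struct uart_ctrl *)0xB0002000u)

    void uart_enable(void)
    {
        UART_CTRL->enable = 1;     /* read-modify-write of the register */
    }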

Unless, of course, you have multiple registers with ordering constraints between them (e.g. write some data into one register, then toggle a flag in another). The volatile keyword in C gives no guarantee of ordering between accesses to different volatile objects, so the compiler is free to reorder the write to the flag ahead of the write to the data.

That is what barriers, or accessor functions, are for: to guarantee the ordering. I'm not saying that volatile is a cure-all; it is not. It is a tool that needs to be understood. On many architectures one also needs to understand that a write to memory may not actually reach memory. For example, on MIPS a SYNC instruction is needed to flush the write buffer and guarantee that the data has been written out to the cache. On processors that are not cache coherent, a cache flush is also required for data structures in memory that the hardware reads via DMA, and a cache invalidate is needed after the hardware updates memory, before the CPU reads it.
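A sketch of that non-coherent DMA case; the helper names are invented stand-ins for whatever cache-maintenance hooks the platform provides:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical platform hooks; a non-coherent SoC provides some
     * equivalent, often via CACHE instructions or a HAL call. */
    void dcache_flush_range(void *addr, size_t len);       /* write dirty lines back to RAM */
    void dcache_invalidate_range(void *addr, size_t len);  /* discard stale lines           */

    #define DMA_SRC_REG ((volatile uint32_t *)0xB0003000u)  /* invented register */

    void dma_send(void *buf, size_t len)
    {
        /* The CPU filled buf through the cache: push the dirty lines to
         * memory before the device reads the buffer behind the cache. */
        dcache_flush_range(buf, len);
        *DMA_SRC_REG = (uint32_t)(uintptr_t)buf;   /* kick off the transfer */
    }

    void dma_receive_done(void *buf, size_t len)
    {
        /* The device wrote buf directly to memory: drop any stale cached
         * lines before the CPU looks at the new data. */
        dcache_invalidate_range(buf, len);
    }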

Things like interrupt handlers are fairly trivial to code in C.

As long as someone else is writing the assembly code that preserves state for the interrupted context, prevents the interrupt handler from being interrupted, and so on, and then calls into the C code. And with those constraints, pretty much any compiled language is equivalent.

In C the context is usually fairly small compared to many other languages.
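For what it's worth, on targets where GCC supports the interrupt attribute (MIPS and AVR among them, and Cortex-M handlers are plain functions anyway), the compiler emits that save/restore itself, so the handler body really is just C. A sketch with an invented acknowledge register:

    #include <stdint.h>

    #define TIMER_ACK ((volatile uint32_t *)0xB0004000u)  /* invented address */

    volatile uint32_t ticks;

    /* The attribute tells the compiler to generate the register save/
     * restore and the return-from-interrupt sequence for this target. */
    void __attribute__((interrupt)) timer_isr(void)
    {
        *TIMER_ACK = 1;   /* acknowledge the interrupt source */
        ticks++;          /* keep the work small and bounded  */
    }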

One can do interesting things with the linker and C code that are not really possible with most other languages. For example, I can easily link my code to execute at a particular address and generate a binary without any ELF headers or other cruft, and there are plenty of other tricks that can be done with linker scripts.
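A sketch of the sort of thing meant here; the section name, the load address and the build commands are illustrative:

    #include <stdint.h>

    /* Entry point kept in its own section so a linker script can pin it
     * at the address the hardware fetches from after reset. */
    void __attribute__((noreturn, section(".text.start"))) _start(void)
    {
        /* set up the stack, DRAM, clocks, ..., then hand off; never return */
        for (;;)
            ;
    }

    /* Illustrative linker script (boot.lds):
     *
     *   ENTRY(_start)
     *   SECTIONS {
     *     . = 0xBFC00000;                  the MIPS reset vector, for example
     *     .text : { *(.text.start) *(.text*) }
     *     .rodata : { *(.rodata*) }
     *     .data : { *(.data*) }
     *     .bss : { *(.bss*) *(COMMON) }
     *   }
     *
     * Build and strip down to a raw image with no ELF headers:
     *
     *   mips-linux-gnu-gcc -nostdlib -T boot.lds -o boot.elf boot.c
     *   mips-linux-gnu-objcopy -O binary boot.elf boot.bin
     */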

That's pretty much true for any compiled language.

Not really. Many compiled languages need a lot of runtime support for things like memory management, which C does not.

There is no unexpected overhead due to the language. There is no background garbage collection that can run at some inopportune time. There's no extra code to do bounds or pointer checking to slow down the code or even get in the way.

The flip side of this is that you do the bounds checking yourself (and often get it wrong, leading to security vulnerabilities), and you end up passing it through a compiler that isn't designed to aggressively elide bounds checks that it can prove are not needed.
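A concrete instance of how easy a hand-rolled check is to get wrong (the function names are made up): forming buf + len can overflow the pointer, so the comparison has to be written in terms of the space that is left.

    #include <stddef.h>
    #include <string.h>

    /* Broken: if len is huge, buf + len can wrap (and pointer overflow is
     * undefined behaviour anyway), so the check can pass when it shouldn't. */
    int copy_in_bad(char *buf, char *end, const char *src, size_t len)
    {
        if (buf + len > end)
            return -1;
        memcpy(buf, src, len);
        return 0;
    }

    /* Safer: compare lengths and never form an out-of-range pointer
     * (assumes buf <= end, i.e. they bound one buffer). */
    int copy_in(char *buf, char *end, const char *src, size_t len)
    {
        if (len > (size_t)(end - buf))
            return -1;
        memcpy(buf, src, len);
        return 0;
    }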

Languages that do bounds checking and other hand-holding generally don't work well on bare metal or in low-level environments (e.g. device drivers). If you're relying on the language to catch your bugs, then you have no business writing code in this sort of environment. Here a pointer often needs to be just a pointer, with no hand-holding and no compiler or runtime second-guessing the programmer. Compilers and languages do not understand hardware.

Generally it is pretty easy to move between different versions of the toolchain. C generally doesn't change much.

I can't tell from the Internet. Did you actually say that with a straight face? Even moving between different versions of the same C toolchain can cause C programs that depend on undefined or implementation-defined behaviour (i.e. basically all C programs, including yours, given that several of the things you listed elsewhere in this thread as really nice features of C are undefined behaviour, such as casting from a long to a pointer) to be optimised into something that doesn't do what the programmer intended.

Yes, I did, and within reason one generally doesn't have issues moving between different versions of a compiler. In C, a long is the size of a pointer in most ABIs, and I write code that deals with switching between 32-bit and 64-bit ABIs. If you're talking about 16-bit targets or Microsoft then things are often pretty screwed up, but generally speaking, on 32-bit and 64-bit platforms the size of a long follows the size of a pointer (except on Windows, where the LLP64 model keeps long at 32 bits; but then, who uses Windows for IoT?).
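Where code really does lean on that relationship, it is cheap to make the assumption fail the build instead of miscompiling; a C11 sketch:

    #include <stdint.h>

    /* True on the usual LP64 and ILP32 ABIs, false on LLP64 (Windows). */
    _Static_assert(sizeof(long) == sizeof(void *),
                   "this code assumes a long can hold a pointer");

    static inline long ptr_to_long(void *p)
    {
        /* uintptr_t is the integer type intended for round-tripping
         * pointers; the final cast relies on the assertion above. */
        return (long)(uintptr_t)p;
    }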

I have seen far fewer changes with C across toolchain versions than with other languages (e.g. C++). I have over 25 years of experience working in this type of environment. My experience with changing toolchain versions in C has generally been pretty painless compared to my experience with, say, C++. On one C++ project we couldn't move the toolchain by even a minor revision because all hell broke loose, and there was no way to work around it.

I've been writing bare-metal software since the 1980s, before I was in high school. I've done Linux kernel porting, written device drivers for a variety of operating systems, and worked on many embedded systems. I've written PC BIOS code and device drivers for Linux, MS-DOS, OS/2, Solaris, VxWorks and U-Boot. I've ported the Linux kernel to an unsupported PowerQUICC processor and worked on numerous Linux device drivers. I also work with a number of custom environments and have written numerous bare-metal bootloaders from scratch, and by bootloader I'm not talking about GRUB, since in these cases there is no BIOS and DRAM is not initialized; my code is the first thing that runs straight from the reset vector. I also work with a lot of embedded multi-core chips (up to 96 cores spread across two chips in a NUMA architecture). I've worked on drivers for everything from PCI, I2C, memory initialization (DDR2/DDR3/DDR4), USB, SATA and SPI to high-speed networking: ATM (Asynchronous Transfer Mode) at 25M/155M/622M/2.4G, Ethernet at 100Mbps, 1G, 2.5G, 5G, 10G, 25G and 40G, Wi-Fi, high-speed serial and other technologies, as well as FPGAs and networking PHY chips (try dealing with analog issues at 26GHz!). I've worked with x86, SPARC, MIPS, ARM, PowerPC, Arduino and some other exotic architectures. I also deal with handlers for things like ECC errors in memory, cache issues, high-speed SerDes and a lot of other hardware issues. At any given time I have at least two hardware schematics open (the smallest being around 40 pages) as well as numerous hardware reference manuals, which are often thousands of pages long.

I was also responsible for the data path and a good chunk of the control plane of a complex router that ran multiple instances of Cisco IOS for the control plane, where my code performed all packet manipulation for every sort of interface imaginable (including a lot of DSL), MPLS and many other protocols. So yes, I can say this with a straight face.

Comment Re:Meh (Score 1) 374

My definition of clean is that you don't have to completely rewrite assembly code when moving from 32-bit to 64-bit. On many other architectures, e.g. AArch64 or x86_64, you do.

I'm not saying that MIPS is perfect, nor that there's anything wrong with AArch64, only that it is radically different from 32-bit ARM; and I agree, the changes were badly needed for the reasons you give. In no way can you say the transition from ARM32 to AArch64 is clean, however. The code needs to be completely rewritten.

As for GCC extensions, to the best of my knowledge all of the Cavium MIPS extensions have been upstreamed, and have been for quite some time. Unlike most MIPS vendors we actually have a sizeable compiler and toolchain team, which has also been very active on AArch64.

As for bitfield manipulation instructions, they are there. Look up the ins (insert), ext (extract), bbit1 (branch if bit set) and bbit0 (branch if bit clear) instructions. I use them extensively in my code. I know AArch64 has similar instructions that are even more powerful.
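For what it's worth, plain shift-and-mask C is usually enough to get those: GCC for MIPS32r2 and later typically lowers the patterns below to a single ext or ins (the field positions are invented).

    #include <stdint.h>

    /* Extract bits [11:4] of a status word; typically a single ext. */
    static inline uint32_t get_field(uint32_t status)
    {
        return (status >> 4) & 0xffu;
    }

    /* Replace bits [11:4] with val; typically a single ins. */
    static inline uint32_t set_field(uint32_t status, uint32_t val)
    {
        return (status & ~(0xffu << 4)) | ((val & 0xffu) << 4);
    }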

Cavium actually had to teach ARM about atomic instructions, hence their addition in ARMv8.1, because load-linked/store-conditional instructions don't scale. As far as I know we're still trying to get ARM to add transactional memory support (something we support in our MIPS chips), which is better than atomics on pairs. A big reason the atomic instructions are there is that Cavium pushed hard for them, based on our experience with scaling chips that have a large number of cores.

Also, the branch-likely instructions had been deprecated for a very long time. Patching a binary is actually pretty trivial, since the instructions just need to be replaced with their non-likely equivalents.

As for the hi/lo registers, those exist because a multiply result can exceed what fits in a single register. There are always the mul/dmul instructions introduced in MIPS release 6, which do not use the hi/lo registers, for when you don't care about overflowing the target register. The hi/lo registers come mostly from the 32-bit days, to handle a 64-bit result and its effect on the pipeline.
