
Comment Re: Umm (Score 3, Insightful) 362

You cite an anecdotal article which does not apply to the places where most people are complaining about voter ID. Many elderly people, for example, lack an ID. In some rural areas it is also difficult to get an ID, since the DMV is often a significant distance away and open for a limited number of hours, often during working hours. Harlem has a very different makeup from rural areas, and access to an ID there is far easier.

Here is a better article. About 11% of Americans do not have government-issued photo identification. A federal court in Texas found that 608,740 registered voters lacked the forms of identification required for voting.

The amount of voter fraud in the United States is exceedingly low, so voter ID laws are a solution in search of a problem. Out of 1 billion votes cast there were 44 cases of fraud, a rate of 0.0000044%.

There is also widespread evidence that such laws are designed to target Democratic voters and that they tend to hit the poor and minorities hardest.

Comment Re: Learn C for advanced security, not for basics (Score 1) 374

Many of these IoT devices have small amounts of memory and do not run Linux. Even on those that do run Linux, understanding C is a must when you deal with hardware. I'm sorry for you, but the embedded world is based almost entirely on C, not Rust or any other language. Learning C for embedded work is essential; you'll get a lot further knowing C than knowing Rust or even Python (which is making some inroads). The most valuable jobs basically all require C.

Comment Re: Learn C for advanced security, not for basics (Score 1) 374

I agree with you about the overhead. The C++ overhead is generally minimal, though I always have a hard time convincing people of that. Back in the day all of my debugging was at the assembly level because there was no source-level debugger for kernel device drivers.

Write barriers also guarantee that the data is actually written to memory. On many platforms (e.g. MIPS), a store instruction may not actually write to the cache but instead sit in a write-combining buffer, and there are different sync instructions, some of which preserve ordering and some of which don't. One has to explicitly issue a sync instruction to flush the write buffer to the L2 cache. If the processor is not cache coherent (like a number of processors I've used) then you also need to explicitly flush or invalidate the cache as needed. Of course, all of this depends on where the write is going; writes to I/O registers typically don't need the synchronization.

Comment Re:Artificial language limits (Score 1) 374

Forth requires a Forth interpreter or a BIOS underneath it. Try writing Forth when there is neither. That 8K includes everything. I'd like to see a 64-bit Forth program compete; I highly doubt it could. In my case there is no BIOS, since this bootloader is the very first thing that executes once the chip exits reset.

Comment Re:Arduino uses C++, Pi uses Linux (Score 1) 374

Hardware registers can be defined by volatile bitfields so a simple pointer can be used to access them.

Unless, of course, you have multiple registers with ordering constraints between them (e.g. write some data into one register, then toggle a flag in another), because the volatile keyword in C does not give any guarantees of ordering between accesses to different volatile objects, and the compiler is completely free to reorder the write to the flag before the write to the data.

That is what barriers (or accessor functions that include them) are for: to guarantee the ordering. I'm not saying that volatile is a cure-all, since it is not; it is a tool that needs to be understood. On many architectures one also needs to understand that a write to memory may not actually reach memory right away. For example, on MIPS a SYNC instruction is needed to flush the write buffer and guarantee that the data has been written to cache memory. On processors that are not cache coherent, a cache flush is also required for data structures in memory that the hardware reads via DMA, and a cache invalidate is needed after the hardware updates memory, before the CPU reads it.

Things like interrupt handlers are fairly trivial to code in C.

As long as someone else is writing the assembly code that preserves state for the interrupted context, prevents the interrupt handler from being interrupted, and so on, and then calls into the C code. And with those constraints, pretty much any compiled language is equivalent.

In C the context is usually fairly small compared to many other languages.

One can do interesting things using the linker with C that are not really possible with most other languages. For example, I can easily link my code to execute at a particular address and generate a binary without any ELF headers or other cruft, and there are plenty of other interesting things that can be done with linker scripts.
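For illustration, a minimal linker script in that spirit (the boot address and section placement are hypothetical, not from any real board):

```
OUTPUT_ARCH(mips)
ENTRY(_start)
SECTIONS
{
    /* Link to run at a fixed (hypothetical) boot address. */
    . = 0xffffffff80000000;
    .text   : { *(.text*) }
    .rodata : { *(.rodata*) }
    .data   : { *(.data*) }
    .bss    : { *(.bss*) *(COMMON) }
    /DISCARD/ : { *(.comment) *(.note*) }
}
```

Something like `objcopy -O binary boot.elf boot.bin` then strips the ELF headers and leaves a raw image that can run straight from the reset vector.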

That's pretty much true for any compiled language.

Not really. Many compiled languages need a lot of support to handle things like memory management which C does not.

There is no unexpected overhead due to the language. There is no background garbage collection that can run at some inopportune time. There's no extra code to do bounds or pointer checking to slow down the code or even get in the way.

The flip side of this is that you do the bounds checking yourself (and often get it wrong, leading to security vulnerabilities) and end up passing the code through a compiler that isn't designed to aggressively elide bounds checks it can prove unnecessary.

Languages that do bounds checking and other hand-holding generally don't work well on bare metal and in low-level environments (e.g. device drivers). If you're relying on the language to catch your bugs then you have no business writing code in this sort of environment. Here a pointer often needs to be just a pointer, with no hand-holding and no compiler or runtime second-guessing the programmer. Compilers and languages do not understand hardware.

It is generally pretty easy to move between different versions of the toolchain; C doesn't change much.

I can't tell from the Internet. Did you actually say that with a straight face? Even moving between different versions of the same C toolchain can cause C programs that depend on undefined or implementation-defined behaviour (i.e. basically all C programs, including yours, given that several of the things you listed in another post in this thread as really nice features of C are undefined behaviour, such as casting from a long to a pointer) to be optimised into something that doesn't do what the programmer intended.

Yes I did, and within reason one can generally move between different versions of a compiler without issues. In C a long is the size of a pointer in most ABIs, and I write code that deals with switching between 32-bit and 64-bit ABIs. If you're talking about 16-bit platforms or Microsoft then things are often pretty screwed up, but generally speaking on 32-bit and 64-bit platforms the size of a long follows the size of a pointer (except on Windows, where the LLP64 model makes a long 32 bits, but then who uses Windows for IoT).

I have seen far fewer changes with C across toolchain versions than with other languages (e.g. C++). I have over 25 years of experience working in this type of environment, and my experience changing toolchain versions with C has generally been pretty painless compared to my experience with, say, C++. On one C++ project we couldn't move the toolchain by even a minor revision because all hell broke loose and there was no way to work around it.

I've been writing bare-metal software since the 1980s, before I was in high school. I've written PC BIOS code and device drivers for Linux, MS-DOS, OS/2, Solaris, VxWorks and U-Boot, ported the Linux kernel to an unsupported PowerQUICC processor, and worked on numerous Linux device drivers and many embedded systems. I also work with a number of custom environments and have written numerous bare-metal bootloaders from scratch, and by bootloader I'm not talking about GRUB: in these cases there is no BIOS and DRAM is not initialized, so my code is the first thing that runs straight from the reset vector. I also work with a lot of embedded multi-core chips (up to 96 cores spread across two chips in a NUMA architecture).

I've written drivers for everything from PCI, I2C, memory initialization (DDR2/DDR3/DDR4), USB, SATA and SPI to high-speed networking: ATM (Asynchronous Transfer Mode) at 25M/155M/622M/2.4G, Ethernet at 100Mbps, 1G, 2.5G, 5G, 10G, 25G and 40G, Wi-Fi and high-speed serial, as well as FPGAs and networking PHY chips (try dealing with analog issues at 26GHz!). I've worked with x86, SPARC, MIPS, ARM, PowerPC, Arduino and some more exotic architectures, and I deal with things like handlers for ECC errors in memory, cache issues, high-speed SerDes and a lot of other hardware issues. At any given time I have at least two hardware schematics open (the smallest being around 40 pages) as well as numerous hardware reference manuals, which often run to thousands of pages.

I was also responsible for the data path and a good chunk of the control plane of a complex router which ran multiple instances of Cisco IOS for the control plane, where my code performed all packet manipulation for every sort of interface imaginable (including a lot of DSL), MPLS and many other protocols. So yes, I can say this with a straight face.

Comment Re:Meh (Score 1) 374

My definition of clean is that you don't have to completely rewrite assembly code when moving from 32-bit to 64-bit. On many other architectures, e.g. AArch64 or x86_64, you do.

I'm not saying that MIPS is perfect, and I'm not saying that there's anything wrong with AArch64, just that it is radically different from 32-bit ARM; I agree the changes were badly needed for the reasons you specify. In no way can you call the transition from ARM32 to AArch64 clean, however. The code needs to be completely rewritten.

As for GCC extensions, to the best of my knowledge all of the Cavium MIPS extensions have been upstreamed and have been for quite some time. Unlike most MIPS vendors we actually have a sizeable compiler and toolchain team which has also been very active with AARCH64.

As for bitfield manipulation instructions, they are there. Look up the ins (insert), ext (extract), bbit1 (branch if bit set) and bbit0 (branch if bit clear) instructions. I use them extensively in my code. I know AArch64 has similar instructions that are even more powerful.

Cavium actually had to teach ARM about atomic instructions, hence their addition in ARMv8.1: load-linked/store-conditional doesn't scale, and Cavium pushed hard for true atomics because of our experience with chips with a large number of cores. As far as I know we're still trying to get ARM to add transactional memory support (something we support in our MIPS chips), which is better than atomics on pairs.

Also, the branch-likely instructions had been deprecated for a very long time. Patching a binary is actually pretty trivial, since the instructions just need to be replaced with their non-likely equivalents.

As for the hi/lo registers, that was done because the result exceeds what can be held in a single register. There are also the mul/dmul instructions introduced in MIPS R6, which do not use the hi/lo registers, for when you don't care about overflowing the target register. Hi/lo come mostly from the 32-bit days, to handle a 64-bit result and the effect it had on the pipeline.

Comment Re:Arduino uses C++, Pi uses Linux (Score 1) 374

In this particular case it was taking advantage of polymorphism in the main data path of a network driver (ATM networking, LAN emulation). The inheritance tree was quite short and kept to a minimum, because the driver could emulate different types of networks. Overall I don't think there was any additional overhead from C++ versus the same functionality written in C, which would have needed function pointers instead of virtual functions.

Comment Definitely yes (Score 2) 374

While most of my work is for chips that are vastly more powerful than what is found in IoT devices, I work on bootloaders and bare-metal programming, and in some cases memory is at a premium. With only a couple of exceptions, all of the work I have done has been in C. Most small micros are programmed almost exclusively in C with a sprinkling of assembly.

C is very good for working closely with the hardware and in memory-constrained environments. C code does exactly what it says; there is no hidden overhead. The runtime needed to run C code is pretty minimal: often all it really needs to get going is a few registers initialized and a stack.

It works beautifully with memory-mapped registers and data structures, which are extremely common in this environment. There's even a keyword designed for this, volatile, that is not present in most other languages (or does not mean the same thing).

I can use a bitfield to define a hardware register, map a pointer to the address of that register and just use it. Mixing in assembly code is easy if it's needed, though generally I find that assembly isn't needed very often.

It's also easy to generate a binary blob using C, a linker script and a tool like objcopy.

My experience has mostly been with 64-bit MIPS and some ARMv8, but it applies to most embedded processors. The output of the C compiler when optimizing for size (with the right options set) is pretty close to hand-optimized assembly, and often better, because the compiler does things that would otherwise make the code hard to read or maintain. The runtime overhead of C is minimal.

C's flexibility with pointers is also a huge plus. I can easily convert between a pointer and an unsigned long (or unsigned long long) and back as needed, or typecast to a different data type, and pointer arithmetic is trivial. There is no hidden bounds checking or pointer checking to worry about. Many people say that's a bad thing, but when you're working close to the metal those checks can turn into a major pain in the you-know-what. Master pointers and know how and when to typecast them; I've seen too many cases where people screw up pointer arithmetic. I once spent several weeks tracking down corruption in one of my large data structures. It turned out that totally unrelated code in a different module, written by someone who didn't understand pointer arithmetic, was spewing data everywhere except where it should go. He also didn't realize that when you read from a TCP stream you don't always get the amount of data you ask for; it can be less.

I have been writing low-level device drivers and bootloaders for over 20 years, and while programming has changed significantly for more powerful systems in userspace, for low-level programming it has changed very little. My advice is to learn C and learn it well. Know what keywords like static and volatile mean. Anyone who interviews in my group damned well better know what volatile means and how to use it.

C isn't a very complicated language but it takes time to master properly. It also doesn't hold your hand like many modern languages, which is why you often hear that it's easy to have things like buffer overflows or stack overflows, and easy to shoot yourself in the foot. It doesn't have many of the modern conveniences, but those conveniences often come at a cost in memory and performance.

The best book I've seen on the language was written by the authors of the language. It's not very long but it is concise and well written.

The C Programming Language by Brian W. Kernighan and Dennis M. Ritchie.

I have also worked on C++ device drivers. While the overhead of C++ itself is generally minimal, that depends on using only a subset of the language, and you have to know what C++ is doing behind the scenes in many cases. Linking C++ to C can sometimes be a challenge, and linking it to assembly code can be even more so.

In the embedded world, especially when dealing with networking, C is king. Drivers are almost always written in C, as are operating system kernels. While C++ can also be used, it often isn't; part of that is perception from past problems with C++, and C++ tends to be less well supported. Arduino is C++, though it uses only a small subset of the language.

Comment Re:Reason to learn C++ (Score 2) 374

He is absolutely correct. There are some aspects where the two languages diverge, and the way you program in C tends to be very different from the way C++ code is written. I have worked with both extensively on low-level projects (e.g. bare-metal device drivers).

Comment Re:Better options (Score 1) 374


Rust handles low-level bare metal poorly. What happens when you don't have a nice operating system underneath you? What happens when you're dealing with bitfields and memory-mapped hardware registers? C is very well designed for this and even has a keyword, volatile, for just such occasions. Rust's hand-holding becomes a major hindrance and gets in the way. C is very efficient, with minimal language overhead: on the 64-bit MIPS processors I deal with I can have C code running from the reset vector with only a page or two of assembly, including setting up the stack. The output from the compiler with the right optimization flags rivals hand-tuned assembly and often beats it in terms of space. C doesn't require any external libraries. I've written plenty of bare-metal bootloaders in C with minimal assembly that do an incredible amount with very limited memory.

I have a bootloader with drivers for SD cards, embedded MMC (the managed flash chips frequently found on embedded devices, phones and tablets), FAT filesystem support (FAT16 and FAT32), partitioning support and serial drivers in under 8K, and this is for a 64-bit, 48-core network processor running in 64-bit mode that happens to run Linux.

There are a lot of things C can do that Rust cannot, and in many cases, the safety and hand holding of Rust just gets in the way. I need pointers to specific memory addresses, for example, and I don't want bounds checking or thread support.

Try writing an interrupt handler in Rust. Good luck with that.

Comment Re:Meh (Score 1) 374

I love MIPS assembly; I actually have a page of it open at this very moment. In terms of programming, it's beautifully designed, and the extension from 32-bit to 64-bit is pretty seamless. The only real gotchas are the branch delay slots, but I actually find it a nice challenge to fill those slots (I always have .set noreorder).

MIPS assembly is extremely clean, far cleaner than ARM, for example, and the online documentation is superb. I like the MIPS Architecture for Programmers Volume II; Volumes I and III are also useful. I have yet to find as clean a document covering ARMv8.

Even the MIPS instruction encoding is quite clean, and the CPU design makes it easy for vendors to add their own instructions to coprocessor 2. For example, my employer has a bunch of encryption- and hashing-related instructions there. ARM, by contrast, does not allow you to add your own custom instructions to ARMv8.

MIPS is still used in many Internet appliances and home routers, though things are quickly moving to ARM. One advantage of the MIPS instruction set is that it moved very cleanly from 32 bits to 64 bits. With ARM that is not the case: AArch64 is very different from 32-bit ARM, and the differences are bigger IMO than the switch from x86 to x86_64. Another advantage of MIPS is that the licensing costs are quite a bit lower than ARM's.

Comment Re:Artificial language limits (Score 2) 374

Actually most modern languages fall apart here, since at the low level you need a feature set that high-level languages try to protect you from. Many languages carry significant overhead just to use the language and thus don't work well in low-memory situations.

I have written a number of bootloaders that have to fit in 8K of RAM. There is absolutely no way I could write them in anything other than C plus a minimal amount of assembly. C code can be very efficient with modern compilers optimizing for size; I'd put it close to carefully coded assembly. Often the C is smaller, because the compiler will make choices an assembly programmer would avoid for readability and maintainability.

Most higher-level languages do hand-holding with things like pointers and memory management, which gets in the way and consumes a lot of resources. All that pointer checking and other such features have a cost in size and performance. C is very good for low-level bare-metal programming.
