Comment Never AT&T (Score 2) 34

I remember when AT&T took over my @Home cable modem service. The prices went way up and the service got really, really bad. Back when I had @Home I had 10Mbps down and 1Mbps up (originally 10Mbps both ways). Back then that was still pretty insane. Then AT&T took it over and it became ATTBI. AT&T decided that 1Mbps was too much bandwidth and lowered it to 128Kbps up. On top of that, they aggregated EVERYONE's upstream through the same 128Kbps, so I was sharing 128Kbps up with all of my neighbors. At the best of times, ping showed 40% packet loss. Needless to say, dial-up was a lot faster than my "broadband". It was like this for 9 months. AT&T support consisted of "did you reboot your computer and router and modem?", which, of course, did absolutely nothing. AT&T eventually fixed it, but even newspaper articles describing their crappy service didn't change matters.

Finally Comcast took it over and Comcast was a godsend compared to AT&T. You know things are bad when you praise Comcast. Even Comcast's crappy customer service is orders of magnitude better than what I experienced with AT&T.

I will NEVER use AT&T again. I currently use Comcast business, which, while expensive, is much better than residential.

Comment Re:Just Remember, Folks. (Score 2) 169

I have a four-year-old Tesla Model S with 50,000 miles on it. I have not noticed any loss of range or performance. The general consensus among owners is that there is a 5% loss of range at 100K miles. Tesla also has a much better battery thermal management system and better chemistry than a number of other manufacturers (e.g. Nissan, GM).

Degradation turns out to be around 23 miles per 100,000 miles driven.

Here's an excellent talk by one of the foremost experts on lithium-ion battery degradation.

Comment Re:Just Remember, Folks. (Score 1) 169

Tesla is NOT using the same battery tech as everyone else. Here's a good talk about them. Dr. Jeff Dahn is one of the foremost experts on battery failure.

Tesla also has active battery thermal management using liquid cooling. The Nissan Leaf, by comparison, does not. Tesla's chemistry is also a lot better than the chemistry used in the Leaf.

Comment Re:Just Remember, Folks. (Score 1) 169

At 100,000 miles the consensus is that an 85 kWh battery still has 95% of its capacity. The batteries are also designed for automotive use, with active heating and cooling as necessary. Laptop batteries are not designed for longevity, nor do they have active temperature control; laptop batteries (and other consumer device batteries) are usually designed primarily with cost and capacity in mind. As long as they last through the warranty period, that's all the manufacturer cares about. Furthermore, laptops don't treat their batteries very well: lithium-ion batteries really hate being fully charged for any length of time, and hate being fully discharged. When I researched the cells used in my Tesla Model S, the data showed over 70% capacity remaining after 3,000 full charge/discharge cycles. Assuming 200 miles per charge (actually it's a fair amount more), that works out to 600,000 miles.

Generally one doesn't charge the car to 100% or drive it down to 0% either. The car lets you choose how much to charge it and warns you if you choose over 90%. I am just shy of 50,000 miles and haven't noticed a drop in range or performance in the four years I've had the car.

Comment Re:Yeah, with a fucking asterisk (Score 4, Informative) 169

Things are rather different for airplanes than for cars. Airplanes are far more complex in terms of what can go wrong; the autopilot's job, however, is for the most part simpler. My Model S already checks the tire pressure. If the car is iced up it likely won't drive itself either, since obviously all the cameras need to work, as well as the ultrasonic sensors and radar.

The reliability of a car also does not need to match that of an airplane since with a car you can usually just pull over. With an airplane carrying a lot of passengers it's a whole different story. It's not like it can just pull over and stop at 30,000 feet.

Mechanically they're night and day. The number of moving parts in a Tesla's drivetrain is a small fraction of what it is in a gasoline or hybrid vehicle, which in turn is far simpler than an airplane. The car already monitors just about everything as it is: battery temperature, current/voltage, coolant temperature, air temperature, tire pressure, traction control, stability control, etc. There's even a rain/snow sensor. The autopilot feature also won't work if the car can't see the road clearly, and it's not supported when it's raining or snowing. The car even monitors the state of the 12V battery; in my Model S it warned me before the battery failed, and my car is a first-generation model.

The car also pays attention to a lot more than a driver can: with 8 cameras and other sensors it is constantly looking all around the vehicle. It doesn't get distracted by kids in the back seat, changing radio stations, or cell phones, either. The software will continue to improve as time goes on.

Comment Re: Umm (Score 4, Informative) 396

You cite an anecdotal article which does not apply to the places where most people are complaining about voter ID. Many elderly people, for example, lack an ID. In some rural areas it is also difficult to get an ID, since the DMV is often a significant distance away and is open for a limited number of hours, often during working hours. Harlem has a very different makeup than rural areas, and access to an ID there is far easier.

Here is a better article. About 11% of Americans do not have government issued photo identification cards. A federal court in Texas found that 608,740 registered voters didn't have the forms of identification required for voting.

The amount of voter fraud in the United States is exceedingly low, so voter ID laws are a solution in search of a problem. Out of 1 billion votes cast there were 44 cases of fraud, a rate of 0.0000044%.

There is also widespread evidence that such laws are designed to target Democratic voters, and that they tend to target the poor and minorities.

Comment Re: Learn C for advanced security, not for basics (Score 1) 374

Many of these IoT devices have small amounts of memory and do not run Linux. Even on those that do run Linux, understanding C is still a must when you deal with hardware. I'm sorry for you, but the embedded world is based almost entirely on C, not Rust or any other language. Learning C for embedded work is essential. You'll get a lot further knowing C than knowing Rust or even Python (which is making some inroads). The most valuable jobs basically all require C.

Comment Re: Learn C for advanced security, not for basics (Score 1) 374

I agree with you about the overhead. The C++ overhead is generally minimal, though I always have a hard time convincing people of that. Back in the day all of my debugging was at the assembly level, because there was no source-level debugger for kernel device drivers.

Write barriers also guarantee that the data is actually written to memory. On many platforms (e.g. MIPS), when you write to memory with a store instruction the data may not actually be written to the cache but instead sit in a write-combining buffer. There are also different sync instructions, some of which preserve ordering and some of which don't. One has to explicitly issue a sync instruction in order to flush the write buffer to the L2 cache. If the processor is not cache coherent (like a number of processors I've used), then you also need to explicitly flush or invalidate the cache as needed. Of course, all of this also depends on where the write goes; writes to I/O registers typically don't need the synchronization.
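
Something like this is the shape of it; the register addresses and names below are made up for the example:

    #include <stdint.h>

    /* Illustrative MMIO registers; real addresses come from the SoC manual.
     * 0xBFxxxxxx is the uncached KSEG1 window on 32-bit MIPS. */
    #define DEV_DATA ((volatile uint32_t *)0xBF000000)
    #define DEV_GO   ((volatile uint32_t *)0xBF000004)

    /* "sync" drains pending stores from the write buffer toward the
     * L2/device; the "memory" clobber also keeps the compiler from
     * reordering accesses across it. */
    static inline void wmb(void)
    {
        __asm__ volatile("sync" ::: "memory");
    }

    void kick_device(uint32_t payload)
    {
        *DEV_DATA = payload;  /* store may sit in the write buffer... */
        wmb();                /* ...until sync pushes it out */
        *DEV_GO = 1;          /* device sees the data before the go bit */
    }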

Comment Re:Artificial language limits (Score 1) 374

Forth, though, requires a Forth interpreter or a BIOS underneath it. Try writing Forth when there is no Forth interpreter or any BIOS. That 8K includes everything. I'd like to see a 64-bit Forth program compete; I highly doubt it would. In my case there is no BIOS, since this bootloader is the very first thing that executes once the chip exits reset.

Comment Re:Arduino uses C++, Pi uses Linux (Score 1) 374

Hardware registers can be defined by volatile bitfields so a simple pointer can be used to access them.

Unless, of course, you have multiple registers with ordering constraints between them (e.g. write some data into one register, toggle a flag in another), because the volatile keyword in C does not give any guarantees of ordering between accesses to different volatile objects and the compiler is completely free to reorder the write to the flag before the write to the data.

That is what barriers, or accessor functions, are for: to guarantee the ordering. I'm not saying that volatile is a cure-all, since it is not. It is a tool that needs to be understood. On many architectures one also needs to understand that a write to memory may not actually write to memory right away. For example, on MIPS a SYNC instruction is needed to flush the write buffer to guarantee that the data has been written to cache memory. On processors that are not cache coherent, a cache flush is also required for data structures in memory that the hardware reads via DMA, and a cache invalidate is needed after the hardware updates memory, before the CPU reads it.
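
To make the bitfield point above concrete, here is a sketch of the kind of register definition I mean. The layout and field names are invented, and since bitfield ordering is implementation-defined, real code like this is always tied to a known compiler and ABI:

    #include <stdint.h>

    /* Invented control-register layout; the real one comes from the
     * hardware reference manual. */
    typedef union {
        uint32_t u32;
        struct {
            uint32_t enable   : 1;
            uint32_t loopback : 1;
            uint32_t reserved : 22;
            uint32_t divisor  : 8;
        } s;
    } dev_ctrl_t;

    #define DEV_CTRL ((volatile dev_ctrl_t *)0xBF000010)

    void dev_set_divisor(uint8_t div)
    {
        dev_ctrl_t c;
        c.u32 = DEV_CTRL->u32;   /* one volatile read of the register */
        c.s.divisor = div;       /* modify the local copy */
        DEV_CTRL->u32 = c.u32;   /* one volatile write back */
    }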

Things like interrupt handlers are fairly trivial to code in C.

As long as someone else is writing the assembly code that preserves state for the interrupted context, prevents the interrupt handler from being interrupted, and so on, and then calls into the C code. And with those constraints, pretty much any compiled language is equivalent.

In C the context is usually fairly small compared to many other languages.
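
As a sketch of how small the handler itself can be, assuming a GCC target that supports the interrupt attribute (the UART registers and status bit here are invented):

    #include <stdint.h>

    /* Invented UART registers for the example. */
    #define UART_STATUS ((volatile uint32_t *)0xBF000020)
    #define UART_DATA   ((volatile uint32_t *)0xBF000024)

    volatile uint32_t last_rx;

    /* Where GCC supports it, the interrupt attribute makes the compiler
     * emit the prologue/epilogue that saves and restores the interrupted
     * context, so the body is ordinary C. */
    void __attribute__((interrupt)) uart_isr(void)
    {
        if (*UART_STATUS & 0x1)    /* RX-ready bit, invented here */
            last_rx = *UART_DATA;  /* on many UARTs the read also acks */
    }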

One can do interesting things using the linker with C code that are not really possible with most other languages. For example, I can easily link my code to execute at a particular address and generate a raw binary without any ELF headers or other cruft, and there are further tricks that can be done with linker scripts.
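
A sketch of the idea: pin the entry point into a named section, let a linker script place that section at the reset vector, and strip the ELF container afterwards. The section name and address below are illustrative (0xBFC00000 is the classic MIPS reset vector):

    /* A linker script along the lines of
     *   SECTIONS { . = 0xBFC00000; .text : { *(.boot.entry) *(.text*) } }
     * places this function first, right at the reset vector. */
    void __attribute__((section(".boot.entry"), noreturn)) _start(void)
    {
        /* set up a stack, bring up the DRAM controller, then call into
         * the rest of the bootloader... */
        for (;;)
            ;
    }

The raw image with no ELF headers then falls out of objcopy -O binary on the linked executable.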

That's pretty much true for any compiled language.

Not really. Many compiled languages need a lot of runtime support to handle things like memory management, which C does not.

There is no unexpected overhead due to the language. There is no background garbage collection that can run at some inopportune time. There's no extra code to do bounds or pointer checking to slow down the code or even get in the way.

The flip side of this is that you do the bounds checking yourself (and often get it wrong, leading to security vulnerabilities), and you end up passing your code through a compiler that isn't designed to aggressively elide bounds checks that it can prove are not needed.

Languages that do bounds checking and other hand-holding generally don't work well on bare metal and in low-level environments (e.g. device drivers). If you're relying on the language to catch your bugs then you have no business writing code in this sort of environment. Here a pointer often needs to be just a pointer, with no hand-holding and no compiler or runtime second-guessing the programmer. Compilers and languages do not understand hardware.

Generally it is pretty easy to move between different versions of the toolchain. C generally doesn't change much.

I can't tell from the Internet. Did you actually say that with a straight face? Even moving between different versions of the same C toolchain can cause C programs that depend on undefined or implementation-defined behaviour (i.e. basically all C programs, including yours, given that several of the things you listed in another post in this thread as really nice features of C are undefined behaviour, such as casting from a long to a pointer) to be optimised into something that doesn't do what the programmer intended.

Yes, I did, and within reason one generally doesn't have issues moving between different versions of a compiler. In C, a long is the size of a pointer in most ABIs, and I write code that deals with switching between 32- and 64-bit ABIs. If you're talking about 16-bit platforms or Microsoft then things are often pretty screwed up, but generally speaking, on 32-bit and 64-bit platforms the size of a long follows the size of a pointer (the exception being Windows, whose LLP64 model keeps long at 32 bits, but then who uses Windows for IoT?).
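
When code does depend on that, the assumption is cheap to make explicit at build time; a minimal sketch:

    /* Fail the build on an ABI (e.g. LLP64 Windows) where a long can't
     * round-trip a pointer, rather than truncating addresses at run time. */
    _Static_assert(sizeof(long) == sizeof(void *),
                   "this code assumes an ILP32 or LP64 ABI");

    static inline void *addr_to_ptr(unsigned long addr)
    {
        return (void *)addr;  /* safe only under the assertion above */
    }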

I have seen far fewer changes with C across toolchain versions than I have with other languages (e.g. C++). I have over 25 years of experience working in this type of environment, and my experience with changing toolchain versions in C has generally been pretty painless compared to my experience with, say, C++. On one C++ project we couldn't move the toolchain by even a minor revision, because all hell broke loose and there was no way to work around it.

I've been writing bare-metal software since the 1980s, before I was in high school. I've written PC BIOS code and device drivers for Linux, MS-DOS, OS/2, Solaris, VxWorks and U-Boot. I've ported the Linux kernel to an unsupported PowerQUICC processor and worked on numerous Linux device drivers and embedded systems. I also work with a number of custom environments and have written numerous bare-metal bootloaders from scratch, and by bootloader I'm not talking about GRUB, since in these cases there is no BIOS and DRAM is not initialized; my code is the first thing that runs straight from the reset vector. I also work with embedded multi-core chips (up to 96 cores) spread across two chips in a NUMA architecture.

I've written drivers for everything from PCI, I2C, memory initialization (DDR2/DDR3/DDR4), USB, SATA and SPI to high-speed networking: ATM (Asynchronous Transfer Mode) at 25M/155M/622M/2.4G, Ethernet at 100Mbps, 1G, 2.5G, 5G, 10G, 25G and 40G, Wi-Fi, high-speed serial and other technologies, as well as FPGAs and networking PHY chips (try dealing with analog issues at 26GHz!). I've worked with x86, SPARC, MIPS, ARM, PowerPC, Arduino and some other exotic architectures. I also deal with handlers for things like ECC errors in memory, cache issues, high-speed SerDes and a lot of hardware issues. At any given time I have at least two hardware schematics open (the smallest being around 40 pages), as well as numerous hardware reference manuals, often thousands of pages each.

I was also responsible for the data path and a good chunk of the control plane of a complex router that ran multiple instances of Cisco IOS for the control plane, where my code performed all packet manipulation for every sort of interface imaginable (including a lot of DSL), MPLS and many other protocols. So yes, I can say this with a straight face.

Comment Re:Meh (Score 1) 374

My definition of clean is that you don't have to completely rewrite assembly code when moving from 32-bit to 64-bit. On many other architectures (e.g. AArch64, x86-64) you do.

I'm not saying that MIPS is perfect, and I'm not saying that there's anything wrong with AArch64, only that it is radically different from 32-bit ARM; I agree the changes were very badly needed for the reasons you specify. In no way, however, can you say that the transition from ARM32 to AArch64 is clean. The code needs to be completely rewritten.

As for GCC extensions, to the best of my knowledge all of the Cavium MIPS extensions have been upstreamed, and have been for quite some time. Unlike most MIPS vendors we actually have a sizeable compiler and toolchain team, which has also been very active with AArch64.

As for bitfield manipulation instructions, they are there. Look up the ins (insert), ext (extract), bbit1 (branch if bit set) and bbit0 (branch if bit clear) instructions. I use them extensively in my code. I know AArch64 has similar instructions that are even more powerful.
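
In C you rarely write those by hand; with compile-time positions and widths, GCC for MIPS32r2 and later typically turns helpers like these into single ext/ins instructions (a sketch; len must stay below 32):

    #include <stdint.h>

    /* Extract len bits starting at bit pos; compiles to "ext" when
     * pos and len are compile-time constants. */
    static inline uint32_t field_get(uint32_t word, unsigned pos, unsigned len)
    {
        return (word >> pos) & ((1u << len) - 1u);
    }

    /* Insert val into the same field; compiles to "ins". */
    static inline uint32_t field_set(uint32_t word, unsigned pos,
                                     unsigned len, uint32_t val)
    {
        uint32_t mask = ((1u << len) - 1u) << pos;
        return (word & ~mask) | ((val << pos) & mask);
    }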

Cavium actually had to teach ARM about atomic instructions, hence their addition in ARMv8.1, because load-linked/store-conditional instructions don't scale. As far as I know we're still trying to get ARM to add transactional memory support (something we support in our MIPS chips), which is better than atomics on pairs. A big reason the atomic instructions are there is that Cavium pushed hard for them, based on our experience with chips with a large number of cores and how they scale.
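
The scaling difference is visible even from plain C11; the same source compiles to an LL/SC retry loop or to a single far atomic depending on the target (a sketch):

    #include <stdatomic.h>
    #include <stdint.h>

    _Atomic uint64_t packet_count;

    /* On baseline ARMv8.0 this becomes an ldxr/stxr retry loop, which
     * degrades under contention as cores keep killing each other's
     * reservations.  Built for ARMv8.1 LSE (-march=armv8.1-a) it becomes
     * a single ldadd that can execute close to the memory. */
    void count_packet(void)
    {
        atomic_fetch_add_explicit(&packet_count, 1, memory_order_relaxed);
    }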

Also, the branch-likely instructions had been deprecated for a very long time. Patching a binary is actually pretty trivial, since the instructions just need to be replaced with their non-likely equivalents.

As for the hi/lo registers, that was done because the result exceeds what can be held in a single register. There are also the mul/dmul instructions introduced in MIPS r6, which do not use the hi/lo registers, for when you don't care about overflowing the target register. The hi/lo registers date mostly from the 32-bit days, to handle a 64-bit result and the effect it had on the pipeline.
