Comment Re:I just call them Web Designers (Score 1) 586

Everywhere outside the USA (especially Canada) you can't call yourself an Engineer unless you have an Engineering degree :)

(that doesn't mean you can't get a job with the title "engineer", it's just a limitation on self-description).

Usually the best thing to do is rephrase Engineer as Developer, Programmer, or Analyst.

Comment Re:GCC compatibility (Score 3, Interesting) 173

I find it hard to believe that the Linux kernel developers have never heard of ICC. Or, to take another example, never used CodeWarrior or XL C (IBM's PPC compiler, especially good for POWER5 and Cell) or DIAB (or Wind River Compiler, or whatever they call it now). Or even Visual C++. Personally I've had the pleasure of using them all; they all do things differently, and you notice it when a development team uses more than one. I once worked on a team where most of the developers had DIAB, but the company didn't want to pay for licenses for EVERYONE, so only the team leaders and the release engineering guys got it; the rest of us got GCC instead. We had to be mindful not to break the release builds, and with that work ethic everything went pretty much fine all round.

All of them have at one time produced (or still produce) much better code and have much better profiling than GCC, and they are used a lot in industry. Usually, if the commercial compiler doesn't do what you want or is too expensive, GCC is your fallback. Linux turns this on its head because it "wants" to use as much free GNU software as possible, but I don't think the development process should be so inhibited as to ignore other compilers - especially considering they are generally far better optimized for a given architecture.

As a side note, it's well known that gcc 2.95.3 generates much better code on a lot of platforms, but some apps out there refuse to compile with gcc 2.x (I'm looking at rtorrent here, mainly because it's C++ and gcc 2.x C++ support sucks - another reason why commercial compilers are still popular :) and some only build with other versions of gcc, with patches flying around to make sure things build with the vast majority. A significant amount of development time is already "wasted" on compiler differences even with the SAME compiler, so putting ICC or XL C support in there shouldn't be too much of a chore, especially since those compilers are broadly GCC-compatible anyway.

Like the article said, most of the problem - and the reason they have the wrapper - is to nuke certain gcc-specific and arch-specific arguments to the compiler, and the internal code is mostly making sure Linux has those differences implemented. There is a decent white-paper on it here. The notes about ICC being stricter in syntax checking are enlightening: if you write some really slack code, ICC will balk, while GCC will happily chug along generating whatever code it likes. It's probably better all round (and might even improve the code GCC generates - note the quote about GCC only "occasionally" doing the "right" thing when certain keywords are missing) if Linux developers are mindful of these warnings. But as I've said elsewhere in this thread, Linux developers need some serious convincing to move away from GCC (I've even heard a few say "well, you should fix GCC instead" rather than take a patch to fix their code to work with ICC).

Comment Re:GCC compatibility (Score 1) 173

There's no reason you can't build your code to support all the tools you could possibly use to their fullest capacity, though. No reason at all. Except when one tool doesn't do something that the other does that you find important.

I very much doubt any C compiler shipping these days lacks the features required to build the kernel, but the kernel developers only care about adding GCC options and GCC pragmas and attributes - in spite of those who would prefer to use some other compiler.

Comment Re:GCC compatibility (Score 1) 173

None but you should think about the hurdles of porting it to a non-POSIX operating system like AmigaOS (yes they did..) and MorphOS (which is like AmigaOS but the GCC port supports a bunch of craaazy extra options) and OMG think of the children!!!!!!!

Both of those had to rely on a special portability library (newlib port in the first instance, and the ancient "ixemul" library in the second instance) to get it to work, notwithstanding the actual platform features and ABI support.

Maybe they're not noteworthy, but there's plenty of scope for a non-POSIX operating system in the embedded space, where having a custom compiler is part of daily life. What about when you're supporting a new architecture which isn't in mainline GCC yet, for instance, using CodeSourcery patches for a while to enable custom processor features?

Comment Re:GCC compatibility (Score 4, Informative) 173

There isn't one, so what you do is use pragmas (I remember #pragma pack(1)) or attributes (__attribute__((packed))) or something similar.

Of course they're compiler-specific, but there's no reason that code can't be wrapped in defines or typedefs to stop compiler-specific stuff getting into real production code nested 10 directories down in a codebase with 40,000,000 lines.

Linux does an okay job of this - but since coders usually reference the compiler manual to use these esoteric pragmas and types, and are usually told "this is specific to GCC" (the GCC manual does a good job of flagging that), they should be wrapping them by default to keep their application portable and maintainable across future compilers (especially if the attribute name or the way it works changes - as has happened across many a GCC release, let alone other compilers).

What usually nukes it (and why linux-dna has a compiler wrapper) is that they're hardcoding options and doing other weird GCC-specific crap. This is not because they are lazy, but because the Linux kernel has a "we use GCC, so support that - who gives a crap about other compilers?" development policy, and it usually takes some convincing - or a fork, as linux-dna is - to get these patches into mainline.

Comment Re:GCC compatibility (Score 4, Insightful) 173

:)

I think the point is that ICC has been made "gcc compatible" in certain areas by defining a lot of pre-baked defines, and accepting a lot of gcc arguments.

In the end, though, autoconf/automake and cmake and even a hand-coded Makefile can easily abstract the differences between compilers, so that -mno-sse2 is used on gcc and --no-simd-instructions=sse2 on some esoteric (non-existent, I made that up) compiler. I used to have a couple of projects which happily ran on BSD or GNU userland (BSD make, GNU make, jot vs. seq, gcc vs. icc vs. Amiga SAS/C :) and all built fairly usable code from the same script, automatically, depending on the target platform.

The Linux kernel's over-reliance on GCC and its hardcoded options means you have to port GCC to your platform first, before you can use a compiler which may already have been written by or for your CPU vendor (a good example was always CodeWarrior, but that's defunct now).

Of course there is always configure-script abuse; just like you can't build MPlayer for a system with fewer features than the one you're on without specifying 30-40 hand-added options to force everything back down.

A lot of it comes down to laziness - using what you have and not considering that other people may have different tools. And of course there's the usual Unix philosophy that while you may never need something, it should be installed anyway just because an app CAN use it (I can imagine using a photo application for JPEGs alone, but it will still pull in every image library via the dynamic linker at load time, and all those plugins will be spread across my disk).

Comment Re:Why pretend these are ordinary disks? (Score 1) 207

Seems you didn't read the article either, or the parent. I was discussing the reason why SSD manufacturers aren't using special MTD drivers anymore, and the reason they don't is that wear-levelling generally gets done in the MTD driver if there is a simplistic flash controller behind it (although you could do it in the controller, that makes the controller more complicated).

The real problem is Linux uses CHS values (fake as they may be) in the block layer. Everywhere. And the partitioning tools do. And the RAID tools do by proxy.

ext4 has absolutely no idea what the "natural" alignment of the disk blocks is, or how they fit in a "cylinder", because there's no decent way to find out based on CHS values which are fixed up and hardcoded inside the block layer.

You can find out how big a "physical" block is (512, 2048, 2352, 4096..), but 100% of available SSDs return 512 and a bunch of fake CHS values, all for compatibility's sake. CHS just doesn't work anymore, and for compatibility's sake more fake CHS values keep being implemented and dropped on top of the LBA addressing scheme.

If Linux or any other OS had a way of finding out where the natural alignment stood, then regardless of where the partition is created (thus moving the problem away from userspace tools which all get updated independently), the filesystem could be created to take advantage of that alignment.

If the partition is created through "fake" CHS values then performance will suffer if the filesystem isn't aligned. This is the problem right now: filesystems assume that the start of the partition is naturally "cylinder" aligned. With a 128k erase block, CHS "cylinder" alignment, and a 4k filesystem block size, you could be pretty far from well-aligned. If the filesystem knew it had to align its data structures, it wouldn't have to make assumptions about the partitioning scheme. Let's be honest; there are more partitioning schemes than MBR. What about GPT or RDB? BSD slices?

http://www.ipnom.com/FreeBSD-Man-Pages/fdisk.8.html

I love this little snippet;

If you hand craft your disk layout, please make sure that the FreeBSD slice starts on a cylinder boundary. A number of decisions made later may assume this. (This might not be necessary later.)

So, it may be necessary or not. BSD slices are not naturally aligned - as defined - on cylinder boundaries; they use sector size only. Tools such as fdisk try to handle this for you. But the values the disk and the kernel pass back are just not realistic (255 heads, 63 sectors per track...) and do not reflect ANY real disk.

Since you can't change the 255/63 values passed in by the disk or hardcoded in the block layer, and since cylinders and heads make zero sense on a flash drive (or a ramdisk, or a virtualized block layer), why not a new ATA command set which reports the true natural alignment of the disk? Reasonable values, well away from the compatibility values, which the filesystem could retrieve (just as it retrieves the sector size) and rely on to build the best-performing filesystem on that media.

Comment Re:The cameras do nothing (Score 1) 311

No, don't bitch about cameras and their invasion of privacy. You missed the point entirely WHILE agreeing with it.

Bitch about the unaccountable government and law enforcement agencies and lobby for regulation to control the access and use of data.

Cameras don't hurt anybody, just like taking a photo doesn't actually steal your soul.

Comment Re:The cameras do nothing (Score 2, Insightful) 311

Public access only works if all the public watching aren't nutcases.

Just imagine what public access surveillance would do to the "stalking industry", or people who prey on others (even such stuff as seeing who got a hell of a lot of money out of an ATM, or had a nice shiny car and is busy getting his eyes tested).

It's probably best not to throw the entire thing out to the public.

But it does basically throw up the accountability issue; the data and the people behind the data and using the data need to be regulated and accountable. The public is not regulated OR accountable.

The problem with cameras right now is that cameras are AWESOME, but you got some lazy fat donut-munching wanker behind the desk with the little joystick, zooming in on some pair of tits instead of watching and acting on the mugging going on down the street. Or worse, a lazy fat donut-munching wanker who is taking bribes to "lose" footage when it's inconvenient.
