Comment Re:"Killer apps" (Score 1) 606

Git 1.8.3 has issues with certain paths when you're not in Git Bash, and Git Bash isn't exactly well integrated (compared to Unix). To get Git Bash-style features elsewhere, replicating the sh script that comes with it (like showing the current status in your prompt), you need third-party add-ons, and if you're on a large repo, they suck. They're also incomplete.

If you do use Git Bash, then you have to deal with forward slashes (copy-pasting from other sources is fun!).

And if you have a large repo, it's impossibly slow. TortoiseGit has a lot of caching issues too, and if you configure it to NOT have caching issues, then again, it's super slow.

Obviously, on small projects none of this is really a problem, aside from the shell integration. But none of it is a problem on a real Unix.

Comment "Killer apps" (Score 1) 606

I feel like I have to turn in my geek card for using that terminology, but it's still the best way to describe it. In the end, the culture and the environment don't matter as long as you can do what you want to do in the best way.

For programming/CS, it's pretty easy. Lately, all the best stuff is coming out for Unix, which for almost a decade was debatable. Some people liked developing under Unix, but not everyone. Say what you will, but VB6 drove a lot of development back in its day, and .NET has significant mind share. For a long time, the SVN client TortoiseSVN was what a lot of people used.

Now, with SVN blowing up in everyone's face and GitHub becoming a de facto standard, many are turning to git. And git, while it has a few interesting UIs, simply blows on Windows. I got it to work the way I liked, but it took a lot of PowerShell black magic and reverse engineering of some of the Unix tools, and even then it's significantly slower than on Unix for extremely large projects (it shouldn't be used that way, you should split your repos into smaller ones, but when making the transition from SVN that's not always possible).

Then you have tools like Grunt/Node that are becoming standard... they work GREAT on Windows. They work better on Unix. Even though I'm in a Windows shop, half the devs work on Macs to get the Unix ecosystem without any issues integrating into the existing Windows environment.

So that will take care of developers on its own, in due time. For normal users? That's trickier. But it will always be about the apps... and now that everyone does everything in a browser, making people care about what's behind it will always be tricky.

Comment Re:Short answer: no (Score 1) 400

Seems we are talking about two different things.
The file limits.h was always compiler/vendor-specific, or at least CPU-specific.

You misunderstood me. What I quoted was the wording of the ISO C Standard, which sets the requirements for what the values in limits.h must be - not the contents of any particular vendor's header. Of course, any specific implementation can have different values in that header, but the Standard requires them to be at least as large in magnitude as the numbers I've quoted.

Regarding C++ standards, for a very long time they were never followed (at least usually not completely, especially not on Windows), so it was always wise to actually read the .h files or the vendor's manual.

I'm actually not aware of any C++ implementation that doesn't follow the requirements I've quoted. C++ inherited them from C in the very first version of its ISO standard back in 1998, and compilers have generally complied, largely because they themselves usually evolved from C compilers and had already implemented that part of the spec.

However, it is nice to know that some basic agreements on sizes were finally made (how helpful they might be is still a question, IMHO). I mean, the latest C code I had to read was still full of ugly #define u_int32 unsigned int and other stuff like that. I wonder why they never simply agreed on types/typenames like int16 / int32, when every serious program is full of either typedef'd or #define'd helpers like that. Of course, it would limit vendors' ability to create compilers for very odd word sizes, like 9 or 11 bits, but how likely is it that such architectures will come up again?

It has been dealt with in C99 and in C++11. We now have a new header, stdint.h, which defines typedefs like int32_t and uint16_t.

They did, in fact, account for odd word sizes, as well. The header actually defines several families of types, like so:

int8_t, int16_t, int32_t, int64_t, ...
int_least8_t, ...
int_fast8_t, ...

Now, types from the intN_t family are optional - an implementation is not required to provide them. However, it must provide a specific typedef if it does have an integer type with the exact matching number of bits. So if e.g. long is 32-bit, then it must provide int32_t. Or, for some exotic architecture with, say, a 24-bit word, if int corresponds to that word, then it must provide int24_t.

int_leastN_t and int_fastN_t, in contrast, are required to always be available for 8, 16, 32 and 64 bits (and optionally available for any other random N). As the names imply, those aren't actually guaranteed to have exactly N bits - int_leastN_t is the smallest integer type with at least that many bits, and int_fastN_t is the fastest integer type with at least that many bits (usually the native word, if N is smaller than the word size).
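
For illustration, here's a minimal sketch (mine, not anything from the standard) that prints the sizes of the three families on whatever implementation compiles it; on a typical x86-64 Linux box, int_fast32_t often comes out at 8 bytes:

#include <cstdint>
#include <cstdio>

int main() {
    std::printf("int32_t:       %zu bytes\n", sizeof(std::int32_t));       // exactly 32 bits, if provided at all
    std::printf("int_least32_t: %zu bytes\n", sizeof(std::int_least32_t)); // smallest type with at least 32 bits
    std::printf("int_fast32_t:  %zu bytes\n", sizeof(std::int_fast32_t));  // fastest type with at least 32 bits
    return 0;
}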

There's also [u]intmax_t, which is guaranteed to be able to hold any value of any other integer type provided, and [u]intptr_t, which is guaranteed to be wide enough that a pointer can be cast to it and back without losing any data. And then there are a bunch of macros to specify literals of those types, to obtain their max/min values etc - which, in the case of the intN_t types, can also be used to query for their existence with an #ifdef. Of course, most people don't bother and just assume that those are always available, which is true on all but the most exotic platforms (like the aforementioned SHARC, where the smallest addressable memory unit is the 32-bit word, and hence int8_t and int16_t cannot be provided at all).
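
A rough sketch of those supporting macros (all the names are standard stdint.h/inttypes.h ones; the program itself is just my illustration, and note that intptr_t is itself technically optional):

#include <cinttypes>   // PRId64 and friends, for printing fixed-width types
#include <cstdint>
#include <cstdio>

int main() {
#ifdef INT8_MAX
    // The exact-width 8-bit type exists on this implementation.
    std::printf("int8_t range: %d..%d\n", INT8_MIN, INT8_MAX);
#else
    std::printf("no int8_t here (think word-addressable DSPs)\n");
#endif
    std::int64_t big = INT64_C(9000000000);   // literal of the right type
    std::printf("big = %" PRId64 ", intmax_t is %zu bytes\n", big, sizeof(std::intmax_t));

    int x = 42;
    void* vp = &x;
    std::intptr_t p = reinterpret_cast<std::intptr_t>(vp);   // pointer -> integer...
    void* back = reinterpret_cast<void*>(p);                 // ...and back, losslessly
    std::printf("round-trip ok: %d\n", *static_cast<int*>(back));
    return 0;
}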

gcc has had all of these for ages. VC++ lagged quite a bit (since it was only following the C++ spec and kept C at the C89 level) - it only got the header in VS 2010. But for the last four years, it has been possible to use all of this to write code that is fully portable across all major platforms and compilers.

Comment Re:Short answer: no (Score 1) 400

It has also been defined since C89. In particular, C89 requires the following minimum magnitudes for the constants defined in limits.h:

CHAR_BIT: 8
SCHAR_MIN: -127
SCHAR_MAX: 127
UCHAR_MAX: 255
SHRT_MIN: -32767
SHRT_MAX: 32767
USHRT_MAX: 65535
LONG_MIN: -2147483647
LONG_MAX: 2147483647
ULONG_MAX: 4294967295

Since those define the range of the corresponding types, and the spec requires the representation to be binary, they effectively define the minimum number of bits. The minimums for int are the same as for short, and the spec additionally requires int's range to cover at least short's range.
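
Here's a quick sketch (my own) of those guarantees as compile-time checks; any conforming implementation should accept it, since the standard only sets minimums and is free to exceed them:

#include <climits>

static_assert(CHAR_BIT >= 8, "char has at least 8 bits");
static_assert(SCHAR_MIN <= -127 && SCHAR_MAX >= 127, "signed char covers -127..127");
static_assert(SHRT_MIN <= -32767 && SHRT_MAX >= 32767, "short covers -32767..32767");
static_assert(USHRT_MAX >= 65535, "unsigned short reaches 65535");
static_assert(LONG_MIN <= -2147483647L && LONG_MAX >= 2147483647L, "long covers +/-(2^31 - 1)");
static_assert(ULONG_MAX >= 4294967295UL, "unsigned long reaches 2^32 - 1");

int main() { return 0; }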

Note that for signed types, the min and max magnitudes are symmetric and exclude the most negative two's complement value - e.g. signed char has a minimum permitted range of -127..127, not -128..127. That's because they wanted ones' complement and sign-and-magnitude to be valid representations for integers as well (this is made explicit in C99, where those three formats are enumerated as the only valid ones).

Comment Re:Also that pricing is misleading (Score 1) 501

Well let's see, what do we use at work... Cadence. No. HFSS. No. Hyper-V. No. ADS. No.

Hmmm... maybe not so much. There's plenty of shit that doesn't use OpenCL - stuff where you want speed and big memory, but there's little to no GPU use.

Also, a lot of the stuff that does do GPU acceleration wants CUDA. The research labs we have that do GPU-based work are all NVidia, all the time, on account of CUDA.

Comment Re:Something something online sorting (Score 1) 241

Hell, even the hard drives are gaming, or are making their way there. SCSI was the only way to go, even though SATA overtook the performance long ago.
Even old U320 SCSI drives have seek times around 2/3 those of the fastest SATA drives (and consequently higher IOPS).

Then they started putting 2.5" SAS drives in, which are laptop SATA drives with a bigger pricetag.
You are utterly clueless.

You've only got to hold an enterprise SAS drive in one hand and a consumer SATA laptop drive in the other to know they have to be manufactured differently. Then again, you've probably never actually seen an enterprise SAS drive, let alone held one.

This is before even starting to look at the different specifications - where can I buy a 15k RPM laptop SATA drive? How am I going to get multiple paths and multiple controllers accessing the disk when SATA doesn't support such a thing?

The rest of your post is equally misinformed rubbish. I don't know who you build "servers" for, but I pity them. There's a difference between being able to assemble decent server-grade hardware on a budget that precludes big-name vendors, and not understanding what server-grade hardware (or the philosophy behind it) actually is, and you are clearly the latter.

Comment Re:Short answer: no (Score 1) 400

You can derive some other requirements from LONG_MAX etc. char has to have at least 8 value bits (and unsigned char cannot have padding bits), but it can be larger. short must have at least 16 value bits, and long must have at least 32. And then there is the sizeof equation that you mentioned - char is implicitly a part of it because sizeof reports size in chars.
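
To make the derivation concrete, a small sketch (mine) of what the sizeof relationship gives you:

#include <climits>
#include <cstdio>

int main() {
    // sizeof counts chars, so storage bits = sizeof(T) * CHAR_BIT;
    // value bits can be fewer if the type has padding bits.
    std::printf("char : %d bits\n", CHAR_BIT);
    std::printf("short: %zu storage bits (at least 16 value bits required)\n", sizeof(short) * CHAR_BIT);
    std::printf("long : %zu storage bits (at least 32 value bits required)\n", sizeof(long) * CHAR_BIT);
    return 0;
}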

Comment Re:Short answer: no (Score 1) 400

You're wrong here. Compilers can and do take a great deal of leeway when optimizing around unspecified and undefined behavior. For example, g++ often assumes that "this" is never null, and also that two pointers of different types cannot alias. While I'm not aware of any compiler that reorders fields, it's not because of some expectation on the part of programmers - who in their right mind would depend on field ordering across visibility specifiers, and why?
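
To illustrate the aliasing point, a minimal sketch (the function names are mine, nothing standard about them):

#include <cstring>

// Under strict aliasing, g++ may assume *ip and *fp never refer to the
// same object, so it is allowed to return 1 here even if they do alias.
int value_through_float(int* ip, float* fp) {
    *ip = 1;
    *fp = 2.0f;   // assumed not to modify *ip
    return *ip;
}

// The well-defined way to reinterpret bits is memcpy
// (assuming sizeof(int) == sizeof(float), which is common but not guaranteed).
int bits_of(float f) {
    int i;
    std::memcpy(&i, &f, sizeof i);
    return i;
}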

Comment Re:Short answer: no (Score 1) 400

The same clause is present in the C89 standard. It may be something that was true in very early compilers before standardization.

With respect to char and byte, they're pretty much required to be one and the same (really, char is the fundamental unit in the spec - everything else is measured in chars). But it does not have to be an 8-bit byte. There are architectures out there, like SHARC, where sizeof(char)==sizeof(short)==sizeof(int)==1, but the value range is 2^32 for all of those.
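
Code that genuinely needs octets can at least refuse to build on such platforms; a trivial sketch:

#include <climits>

// On a word-addressable DSP, CHAR_BIT might be 16 or 32,
// and sizeof(int) can legitimately be 1.
#if CHAR_BIT != 8
#error "this code assumes 8-bit chars"
#endif

int main() { return 0; }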

Comment Re:Good (Score 1) 236

I root for the team that provides sustainable wealth creation and jobs.
Then you shouldn't be rooting for team corporatism, which has for the last 30-odd years been creating a system of completely unsustainable wealth creation and jobs.

The period of the most "sustainable wealth creation and jobs" in human history was the few decades post-WW2, up until the late '70s when the neoliberals took over the Western world.

Comment What if I want data integrity? (Score 1) 501

Say, RAID-6? That's what you do for drive failures. The problem with drive failure isn't replacing the drive, but the data and the downtime.

With most workstations this is easy: you can get a RAID controller, usually integrated on the board (Dell's PERC 710s are great), knock in a bunch of drives, and go. High performance, high resilience. No such luck on this new Pro.

Another option would be a good external system, maybe a heavy-hitting iSCSI or FC array. That's where you go for the really high end: lots of storage, reliability, etc. Ah well, you're kinda screwed there too: there are no cards to add FC to the Pro, and OS X has no iSCSI initiator, which is shocking for a modern OS; Windows got one in 2003 and Linux in 2005.

Also, you might want to look into SSD failure rates. They aren't particularly high, but they aren't particularly low either. Oh, and they're workload-dependent as well. I loves me some SSDs, but don't think they are rocks on which you can build your house.

Comment ...and if I have no need for that? (Score 1) 501

This is the thing all Mac fans seem to miss: Apple often throws in expensive shit that people don't need and would rather not pay for. You discover with SSDs that they are pretty much all "fast enough" for most tasks, meaning they are not a significant bottleneck, if one at all. You can see this when upgrading from a SATA 2 SSD to SATA 3: you get twice the bandwidth, and benchmarks bear that out, but you notice no operational difference. It was already fast enough for what it's tasked with.

The same goes for even high-end stuff in nearly all cases, like streaming audio samples. SSDs are the best shit EVAR as far as those of us who play with audio samplers (NI Kontakt and the like) are concerned. What you find is that all the limits on the drive side go away. Want to stream 2,000 voices at once? No problem, even "slow" SSDs are fast enough for that.

So the "givashit" quotient on these hyper-fast SSDs is pretty low. If I was running a heavy hitting database maybe. Of course one wouldn't do that on a Mac Pro. For AV work? Nope, regular SSDs are fast enough and space is more of an issue than speed. You can do uncompressed 4:4:4 HD video on any SSD no problem. However you need 13GB/minute to hold it. So 1200MB/sec doesn't matter 225MB/sec is all you need and a SATA-2 SSD could do that. What you need is space for cheap. A comparatively slow 1TB SSD is more use than a lightning fast 250GB one.
