The 'z' for mainframe means 'zero downtime'.
Wait, you think that when you use your credit card, or do an online banking transaction, or make an airline reservation, or make an online purchase, or make an insurance claim, they 'batch it up and run it on a weekend at the end of the month'?
The AC is just completely wrong in everything he said. Of course mainframes have protected memory - they have had it since 1964.
As an aside, IBM announced their 2012 earnings last month. In it, they stated that mainframe revenue grew 56% over last year, and they shipped more mainframe capacity last year than ever before in their history. Now, the interesting thing about that is that over HALF of that capacity was in the form of 'new workload' engines, which means Linux and Java. The oft-repeated slashdot meme that mainframes are legacy hardware running only legacy programs is a myth.
Great if you want your tools to drive your business and not the other way around.
Java the language is not insecure, and nobody (that knows what they are talking about) recommended disabling it (how do you even go about disabling a language?). The problem you linked to was not a problem with Java the language, or even the Java virtual machine. It was a problem in a particular implementation of a JVM (namely, Oracle's implementation). And even there the only problem was that code did not get sandboxed as well as it should. And even then a problem only occurs if you are running untrusted code. Which is why what DHS suggested was actually 'do not run untrusted code', in other words, disable the browser plugin.
So, if you are in an enterprise/mainframe type situation (like this story is about) you don't have a problem. First and foremost, you won't be browsing the web. If you did somehow manage to browse the web, there is no way that untrusted code would ever get into your production environment. And lastly, you are probably running IBM's JRE, not Oracle's.
As for ever-changing libraries, etc., that is why you deal with a vendor you trust, who understands concepts like not breaking existing code, ever. There is a reason why all those ancient COBOL programs are still running.
Um, no. I don't know where you got that crap, but it is entirely false. IBM itself currently sells COBOL, FORTRAN, C/C++, PL/I, and REXX compilers for z/OS. In addition, you can use gcc, etc.
And your line about IBM mainframes not having protected memory is complete bullshit. They have had protected memory since 1964. It is not an option.
And NO user space programs affect system integrity, no matter how they are written or what language they are written in. That is one of the major selling points of z/OS.
You know that a bit is an object that can be counted (like every other object in the world) in any base at all, right? You know that a bit represents the number 1 and nothing more, right?
And while we're on your stupid assertion that things that can be represented must be counted in binary, please explain why everybody in the world refers to 64KB, 3Gb, etc. Nothing like mixing two different bases in the same number to really confuse things (or maybe you think 64 and 3 are binary numbers). And why are file sizes displayed in base 10? Or does simply abusing the well-known prefixes 'kilo', 'mega', etc. mean that we magically switch to some base 2 system of counting (but only for what the prefix represents, not the number itself)?
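To make the base-mixing concrete, here is a trivial Python sketch. The '64' is an ordinary decimal number either way; only the meaning of the unit changes (SI 'kilo' = 1000 versus the abused binary 'K' = 1024):

```python
SI = 1000        # what the prefix "kilo" actually means
BINARY = 1024    # what "KB" is commonly abused to mean

size_binary = 64 * BINARY   # "64KB" as a memory vendor counts it
size_si = 64 * SI           # "64KB" as a disk vendor counts it

print(size_binary)  # 65536
print(size_si)      # 64000
```

Either way the 64 is counted in base 10; nothing about the quantity forces binary counting.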
I assume you would like to look less like an idiot in the future, so I will provide information with references for your education.
"There is no such thing as a half bit"
In communications, a half bit is a signal that is on the wire for half of the time of a full bit. Here is a datasheet from a UART manufacturer. On page 4 they describe the 'line control register' which sets how many stop bits there are: 1, 1.5, or 2. A simple search will return many references to start/stop bits in async communications.
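In case the timing isn't obvious, here is a sketch (assuming a hypothetical 9600 baud line; the numbers are illustrative, not from any particular datasheet). A 'half bit' is just a signal level held on the wire for half of one bit time:

```python
baud = 9600
bit_time = 1.0 / baud  # seconds each full bit occupies on the wire

def frame_time(data_bits, parity_bits, stop_bits):
    # 1 start bit + data + parity + stop; stop_bits may be 1, 1.5, or 2,
    # exactly as the UART's line control register allows
    return (1 + data_bits + parity_bits + stop_bits) * bit_time

print(frame_time(8, 0, 1))    # 10 bit times on the wire
print(frame_time(8, 0, 1.5))  # 10.5 bit times -- there is your half bit
```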
Your little quote you posted provides no support for your position at all. Nobody ever said maximum numbers (such as data lengths) were not going to be in powers of two, or that calculations such as CRC would not be in powers of two. What I said was that data is not naturally (or even usually) transmitted in power of two increments, and you have shown absolutely nothing to disprove that.
'Early networking involved powers of two'. Really? 'Early networking' would be dialup, right? So what were the common speeds - 110, 300, 1200, 2400, 4800, 9600, 14400, 28800, 33600, 56000. Yep, powers of two all. Ethernet - nope, no powers of two there either. How about token ring? Nope, no powers of two there.
The only thing you said that is correct is that it makes sense for virtual pages to be powers of two. However, virtual pages have nothing to do with disk, other than that disk is used for paging space. And of course the first systems to have virtual memory (back when it was really critical that things be as efficient as possible) didn't use disks with 512 byte sectors.
You have posted more stupidly incorrect information in this forum than I have ever seen before.
Packet sizes are even powers of 2? Since when? Going back to async days, a 'packet' consisted of 1 start bit, 5-8 data bits, 0 or 1 parity bit, and 1, 1.5, or 2 stop bits. So the 'smallest' packet would be 7 bits, and the 'largest' packet would be 12 bits. Where are your even powers of 2?
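If you still don't believe it, enumerate the frame sizes yourself. A quick Python sketch (assuming the standard async framing of 1 start bit, and counting only the whole-bit stop cases):

```python
def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

# 1 start bit + 5-8 data bits + 0 or 1 parity bit + 1 or 2 stop bits
sizes = {1 + d + p + s
         for d in range(5, 9)
         for p in (0, 1)
         for s in (1, 2)}

print(sorted(sizes))  # [7, 8, 9, 10, 11, 12]
print(sorted(n for n in sizes if is_power_of_two(n)))  # [8] -- one out of six
```

One frame size out of six happens to land on a power of two; the rest don't, which is the point.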
Oh, you didn't mean async, you meant something more modern, like Ethernet, right? OK, so what is the most common Ethernet packet size? 1500 bytes. Yup, nice even power of two. Oh, maybe you meant token ring. Hmm, token ring packets can be anywhere between 4 and 4051 bytes long. Yup, nice even powers of two.
Yes, you can. Of course the signing key is not part of the standard, obviously. So you would need your own signing key, and until you can prove that your TPM is functioning correctly nobody needs to trust your key.
No, his point was that he, like you, is too stupid to know that what a thing represents has nothing to do with how you count it. Or do you count $5 bills in base 5?
I don't think I have ever read so much incorrect information in one place before. Congratulations! By the way, I have over thirty years experience doing hardware design and assembly programming.
First, to your addressing question. I don't know if you are talking about segment-register type addressing, or bank-select type addressing, but in either case you are completely wrong. In segment-register addressing, the processor performs the calculation of merging your 16-bit 'address' with the current segment, and drives the addressing lines accordingly. In bank-select addressing you pre-select a 'bank' of memory, and the addressing lines from the processor select the appropriate location within the bank. In either case, if you can address 4G then you have 32 addressing lines, either all directly from the processor, or perhaps with some coming from an external bank-select register.
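Here is the segment-register calculation written out, as a sketch of the classic 8086-style scheme (the 4-bit shift and 20-bit mask are the 8086 case specifically, an assumption for illustration, not the only possible widths):

```python
def physical_address(segment, offset):
    # The processor merges the 16-bit segment and the 16-bit offset
    # into one physical address, then drives the address lines with it.
    # On the 8086 that meant segment * 16 + offset, truncated to 20 lines.
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(physical_address(0x1234, 0x0010)))  # 0x12350
```

The program never sees anything but its 16-bit values; the full-width address exists only on the bus, which is the whole point.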
I already said it does not matter what the unit is being selected (bit, byte, word, line, whatever). The addressing does not change based on the size of the data; only the number of data lines changes.
WTF does something being measured in bits-per-second have to do with powers of two? Not a damn thing. Bits can REPRESENT powers of two, but they do not OCCUR in powers of two. If you can't understand that distinction you are really even more clueless than I thought. In memory components, bits/bytes/whatever OCCUR in powers of two. You can't buy a 1000 (not a power of two) byte memory chip, you can buy a 1024 byte (a power of two) chip. You can't buy a 3072 byte (not a power of two) chip (well maybe there is some weird chip like that but it would be special purpose), the next highest size is 4096 (a power of two). However, you certainly CAN send exactly 1000 or 3072, or any other number of bytes across a network. There is absolutely no power-of-two boundary involved there.
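The distinction is trivial to demonstrate. A sketch in Python (the chip capacities are just the examples from above):

```python
def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

# Memory parts come in power-of-two capacities because of how they
# are addressed internally...
chip_sizes = [1024, 4096, 65536]
print(all(is_power_of_two(n) for n in chip_sizes))  # True

# ...but a network transfer can be any length whatsoever.
payload = bytes(1000)  # exactly 1000 bytes on the wire
print(len(payload), is_power_of_two(len(payload)))  # 1000 False
```

The bits in the payload can REPRESENT any value you like in binary; the count of them is not constrained to a power of two in any way.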
Likewise, the size of a hard disk is dependent only on the bit density of the medium. A disk can be manufactured in absolutely any size at all; there is NO 'natural' power of two boundary to disk sizes.
Lastly, grouping bits into 'bytes' or 'words' has nothing to do with powers of two, it has to do with MULTIPLES of bits. There is no reason that the 'word size' of a machine has to be a power of two. IBM mainframes use 24 (not a power of two) and 31 (not a power of two) bit addresses. There have been 6, 10, 12, and 18 bit 'words' in the past. None of those are powers of two. The only thing grouping into bytes does is say that you will always transfer or store a multiple of 8 bits, which has nothing at all to do with powers of two.
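Check the historical widths yourself if you doubt it. A one-liner's worth of Python (the widths listed are the ones named above):

```python
def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

# Word and address widths mentioned above, in bits.
widths = [6, 10, 12, 18, 24, 31]
print([w for w in widths if is_power_of_two(w)])  # [] -- not one of them

# A 24-byte record is a MULTIPLE of 8 bits, but its total bit count
# is 192, which is not a power of two either.
print(24 * 8, is_power_of_two(24 * 8))  # 192 False
```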
Yes, which is why I said it makes no sense at all. Well, you were the one who seemed to be advocating counting by the value represented, not me. Or was there a point to your 'a byte is 2 to the power of 8' statement that I missed?
OK, I re-read your post, and it still doesn't make sense. A byte is certainly not '2 to the power of 8' bits, it is 8 bits. The biggest VALUE that can be represented in a byte is 2^8 - 1, but who cares about that? Surely you are not suggesting that we measure memory size, network speed, disk size, etc. by the biggest VALUE that can be represented, are you? Because if you are, then a 100Gbps network should, according to you, have a 'speed' of 2^100,000,000,000 per second. That makes absolutely no sense at all.
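Spelled out in Python, since apparently it needs to be:

```python
bits = 8
max_value = 2**bits - 1  # biggest VALUE a byte can hold
print(bits, max_value)   # 8 255 -- a byte is 8 bits, not 2**8 bits

# Measuring by representable value is absurd: a 100 Gbps link moves
# 10**11 bits per second; the exponent alone would be the link speed.
bits_per_second = 100 * 10**9
print(bits_per_second)   # 100000000000
```

The count of bits and the value those bits can encode are two entirely different numbers, which was the point all along.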
Counting in binary is just like counting in decimal -- if you are all thumbs. -- Glaser and Way