Hardware

Journal: ARM yourself for an x86-free future.

The x86 architecture holds an even more dominant position in the hardware market than Microsoft does in the software market. It's not quite the same situation: Intel does not have an x86 monopoly, and other producers can and do make x86-compatible alternatives.

But at the end of the day the x86 is coming to the end of its useful life. Although numerous improvements have been made over the years, it still clings to a lot of legacy concepts, and innovations must work their way around difficult obstacles if they are to upgrade, as opposed to replace, the x86 architecture. At a certain point one has to wipe the slate clean.

Before you jump to conclusions, this article is not going to talk about PowerPCs, Macs or 64-bit architectures. Sure, they are relevant to x86 succession, but for the most part they relate to servers and specialised workstations. I want to talk about what most of us are likely to be using for our personal computing requirements in the future, and those needs will not be met by a box on the corner of our desks but by our mobile phones, our pad computers, and the computers built into our home entertainment centres, car dashboards and the backs of airplane seats. And here I think the future is going to be ARM.

So have I sold my shares in Intel and purchased stock in the company that makes ARM? No, because to a large extent they are one and the same. Or, to be more precise, ARM processors are manufactured by most of the major microprocessor manufacturers (including those who just serve embedded markets), and yet the architecture is owned by none of them. It is just one of several quirks that set the ARM processor, or more correctly the ARM architecture, apart from more 'conventional' processors. So who is this 'new kid on the block', and where does he come from? Well, the first lesson to learn is that he is by no means a new kid; in fact, to find the roots of the ARM processor we must jump back 20 years or so.

Remember the days of the Commodore PET, the VIC-20, the TRS-80 and the Sinclair ZX? The first generation of computers with keyboards and a text display that ordinary folks could purchase. They were to spark the home computing revolution, and they also marked the introduction of computers into high schools. In the UK the government wanted to introduce computing into the school curriculum, and did a deal with the BBC (the state TV company) to produce a series of courses and programs. In order to overcome the incompatibility problems that existed between all these little BASIC-in-ROM computers, they drew up their own outline spec and put out an invitation to manufacturers to compete for this standard school computing platform.

The winner was Acorn Computers, and their 6502-based design could be described as roughly similar to an Apple II. It was cut down in some areas (the expansion bus was not as good), but it did have some nice features, such as a few general-purpose analog and digital I/O lines built in for doing classroom experiments. For many years this was the standard computer in UK schools and colleges. In addition, the low price meant that many home users owned them as well. There was nothing snazzy about the BBC computer, or 'beeb' as it was affectionately known; it was a simple, efficient and reliable solution. Although it was more expensive than competitors such as the Sinclair Spectrum and Commodore 64, it was more robust and had a usable keyboard. In short, it was less of a games console and more of a computer that could be used on a daily basis, albeit for very simple tasks.

The sensible, no-frills but reliable and efficient design approach did not win Acorn many friends, and many questioned why they had won the contract in the first place. On a features-per-buck basis the beeb looked very poor. But the design stood the test of time, and so, with computing classes and user requirements continually rising, Acorn faced the problem of coming up with the all-new, improved 16-bit version.

This placed them in a rather enviable position compared with many microcomputer manufacturers. They had an established and stable user base, thanks to their adoption in schools, and a reputation built on reliability and good design rather than on being 'feature packed'. They took an unusual step: they decided to design their own processor for use in the new 16-bit machine. They were not planning to go into the silicon business; rather, relying on the fact that their 'trademark' meant more than the name of a processor in their 'niche' market, they planned on getting a processor manufactured to their own specs.

The UK computer industry was very much in the doldrums. Acorn was based in Cambridge, home to many of the most notable scientists and mathematicians in the world. There was certainly no shortage of very bright people longing to get involved in processor design and having nothing to get their hands on.

RISC processors were the big buzzword of the period. RISC stands for Reduced Instruction Set Computer. Over the years microprocessor designers had been adding instructions and gadgets to cover any remote requirement they could think of, and microprocessor instruction sets were beginning to look much like Swiss army knives. This was good news for the assembly language programmer, but most of the software run on computers was, by now, written in higher-level languages that were then compiled into machine code. Compilers are not very good at making use of quirky and fancy features in an instruction set, and most of the 'extra' features that microprocessor designers were pumping into their CPU cores were not actually getting used.

The idea behind RISC processors is that you cut out all the spaghetti and design a processor that is optimised for running compiled code. The term Reduced Instruction Set came from the original concept that by implementing just the instructions a compiler typically used, you could concentrate on building a leaner, meaner (and hence faster) core. In reality, the real gains of this approach stem from optimising the CPU to run the code it will eventually be running, rather than adding features and expecting software to use them; RISC designs are not necessarily short on instructions. Nowadays all processor development is carried out by analysing software requirements, and chip manufacturers retain software developers who follow the bleeding edge of software development and OS design in order to better understand which developments would be most useful in getting better performance out of the latest software techniques.
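By way of illustration (this is my own sketch, not anything from the original RISC literature), consider what a compiler does with a trivial C loop on a load-store RISC machine: every operand is loaded into a register, operated on, and stored back, each step one simple instruction that is easy to generate and schedule.

    /* On a load-store RISC core a compiler breaks the loop body into
     * simple steps: load a[i], load b[i], add, store c[i], bump the
     * index, compare and branch -- each a single cheap instruction.
     * No exotic memory-to-memory opcodes are needed (or wanted). */
    void add_arrays(int *c, const int *a, const int *b, int n)
    {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }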

But, back to Cambridge. Here we had a herd of scientific dons falling over themselves to make the simplest and most efficient core possible for the beeb's replacement, and in the grand scheme of things they leapfrogged straight ahead to 32 bits. This was the birth of the ARM architecture, but not the ARM processor as we know it today. Acorn's replacement for the beeb was called the Archimedes. Check Google if you want to know more; although it is no longer marketed, there are a lot of enthusiasts' home pages out there. It did not succeed in inheriting the UK schools market. Working from the CPU up had meant that Acorn spent a long time developing the Archimedes, and by the time it was ready for market, cheap PCs had arrived. As we all know, PCs took the market by sheer weight and momentum. And, of course, schools were far more interested in teaching on machines that were the same as the computers their students would encounter in the real world.

Acorn had seen the writing on the wall for their niche UK schools market, and realised that the best asset they had was a top-notch RISC processor design. They restructured the company and spun off the processor design as a separate entity. This was to be called Advanced RISC Machines (the initials having originally stood for Acorn RISC Machine); it was the birth of the ARM processor per se.

They still had no intention of producing silicon. Their idea was to sell the IP of the processor core to other semiconductor manufacturers; ARM themselves would concentrate on developing the CPU core and on selling development tools and software. This meant that silicon manufacturers could widen their portfolio of processors with very little investment and without the problem of having to build support tools for developers. The ARM chip slotted well into the embedded processor market. Flagship 16-bit designs from the major manufacturers had been competing in a features war, and many product designs had been pegged at 8 bits because of the complexity hike needed to move to 16 bits. The ARM processor could achieve the performance of its competitors with a design that was half or even a quarter the size, running at lower clock speeds and hence with lower power consumption.

The ARM processor became very popular in embedded designs, especially mobile applications, where its low power consumption, coupled with a size small enough to allow microcontroller designs with all peripherals and memory on the same chip, made it a winner.

ARM did not fall into the trap of widening their product range. They concentrated on improving their core (sic) product, indeed their only product: the ARM processor core. Nor did they fall into the trap of a features battle with competing processors. They concentrated on their customer base's requirements and looked for the simplest and most efficient way to meet them. They also had another trick up their sleeve: very few people outside electronic engineering circles actually appreciated what they were doing. When xyz silicon manufacturing company produced a new processor range, or market-specific IC, it was xyz corporation's new IC. In fact, as ARM do no work on peripherals or packages, there is no ARM chip; they appear as mere consultants who license some of the IP used in the IC.

That is how it has been for many years, so without picking our way through all the variations bit by bit, let's jump to the present day and see how the product looks.

On a 32-bit RISC CPU it comes as no surprise to find that the instructions are 32-bit words and that an 'int' is 32 bits wide. That also means that a simple loop written in C to, say, iterate over 100 characters in a string will, unless the programmer is particularly diligent, use variables much larger than required and opcodes far more powerful than needed. The ARM core therefore implements a dual instruction set: a 16-bit one and a 32-bit one. When running in 16-bit mode, each 32-bit instruction word holds two 16-bit instructions, which renders the processor far more efficient at operations that only require 16-bit power (which is most of them on a general-purpose computer). The switch to and from 16-bit mode is done when calling routines, so if a programmer is about to write a routine that will clearly only require 16-bit power, he just writes a 16-bit routine instead of a 32-bit one. In an HLL such as C it is enough to have a compiler directive for the function.
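To make that 'compiler directive' concrete, here is a sketch of how it looks with GCC's ARM function attributes (my example, and it assumes a GCC recent enough to support per-function target attributes; older toolchains selected ARM or Thumb per source file instead):

    /* Per-function ARM/Thumb selection with GCC on ARM targets. */

    /* Compiled to the compact 16-bit Thumb instruction set. */
    __attribute__((target("thumb")))
    int count_spaces(const char *s)
    {
        int n = 0;
        while (*s)
            if (*s++ == ' ')
                n++;
        return n;
    }

    /* Compiled to the full 32-bit ARM instruction set. */
    __attribute__((target("arm")))
    long long scale(long long x, int k)
    {
        return x * k;
    }

The compiler and linker insert the mode-switching glue at the call boundaries, so mixed ARM/Thumb programs just work.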

The full 32-bit instruction set, whilst being 'Reduced' or minimalistic, does include some powerful extensions. In particular there is a 32x32 MAC (multiply-accumulate) with a 40-bit accumulator, implemented with data pointers. What's that? An instruction that is very good at doing transforms, the operations at the heart of digital signal processing applications. On other processor families this might have been dubbed something like MegaMultiMediaPowerExtensions.
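To see why a MAC matters, here is the inner loop of a typical filter or transform kernel (a generic sketch of mine, not ARM reference code). The multiply-and-add in the loop body is exactly what a hardware multiply-accumulate executes as a single instruction, with the wide accumulator absorbing intermediate overflow:

    #include <stdint.h>

    /* Dot product: the kernel of FIR filters and DFT-style transforms.
     * On a core with a MAC instruction the compiler can turn
     * "acc += x * h" into one multiply-accumulate; the wide
     * accumulator keeps the running sum from overflowing. */
    int64_t dot(const int16_t *x, const int16_t *h, int n)
    {
        int64_t acc = 0;
        for (int i = 0; i < n; i++)
            acc += (int32_t)x[i] * h[i];
        return acc;
    }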

While we are on the subject of instruction sets, they have added a third alternative instruction set: Java byte code (a feature ARM call Jazelle). Using this instruction set makes implementing and running a JVM far more efficient, although of course you only need it when you are running Java. Such a dual personality is a killer feature for high-end phonesets, where Java is widely embedded but must co-exist with high-efficiency native apps such as codecs. It will be interesting to see whether legacy computing environments ever latch on to the potential of this approach.

At this point I should point out that not all these features are implemented on all processors. The current ARM family may be broadly divided into three product groups. The ARM7 core is still being deployed in new low-end devices. A recent example is the Philips LPC2100 series. This microcontroller comes in a 7mm-wide package and costs a few dollars, yet hosts a complete 32-bit system with 128K of Flash, 32K of RAM and a selection of synchronous and asynchronous interfaces for connecting peripherals. It can run at up to 60 MIPS (i.e. considerably more powerful than the 33MHz 386SXs that were typically used to run Windows 3.1). These low-end devices generally have just the essentials, and in particular do not generally have an MMU.
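Programming such a device is refreshingly direct: the peripherals sit at fixed addresses in the 32-bit address space and are driven through memory-mapped registers. The sketch below blinks an LED using made-up register addresses, purely for illustration; the real addresses and bit layouts come from the datasheet of whichever chip you are using:

    #include <stdint.h>

    /* Hypothetical register addresses -- substitute the real ones
     * from your microcontroller's datasheet. */
    #define GPIO_DIR (*(volatile uint32_t *)0x40020000u) /* pin direction */
    #define GPIO_SET (*(volatile uint32_t *)0x40020004u) /* drive high    */
    #define GPIO_CLR (*(volatile uint32_t *)0x40020008u) /* drive low     */

    #define LED_PIN (1u << 7)

    int main(void)
    {
        GPIO_DIR |= LED_PIN;                 /* LED pin as output */
        for (;;) {
            GPIO_SET = LED_PIN;              /* LED on  */
            for (volatile int i = 0; i < 100000; i++) ; /* crude delay */
            GPIO_CLR = LED_PIN;              /* LED off */
            for (volatile int i = 0; i < 100000; i++) ;
        }
    }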

Moving up the scale, the ARM9 is gradually taking over from the ARM7 in new designs, starting from the top down, so you are more likely to find it in high-end single-application embedded devices such as GPS navigators. Many available ARM9-based processors have an MMU and come in a high pin-count package with a full external bus, hence they may be used in systems with many megabytes of Flash and RAM, and are capable of running powerful operating environments such as the ARM version of Linux.

These CPU cores communicate with the rest of the world via three interface busses. One of these is designed for connecting memory, whilst a second bus (which may run slower than the core, much like the PCI bus on a PC) is used to connect to the (generally on-chip) peripheral devices. Although the ARM standard does not include peripherals, ARM-based chip manufacturers generally follow conventional solutions for peripherals unless they have reason to differ, which further eases software integration and porting, not to mention the efficient re-use of licensed IP.

The third 'group' of ARM-based processors is a bit different from the ARM7 and ARM9. In the mid-90s ARM worked with Digital to develop a high-performance ARM chip. The basic CPU architecture and instruction set are the same, but they added, amongst other things, instruction and data caches, which means they are not 100% compatible with the other devices, although the differences would only be noticed in extreme hand-coded assembly, not in normal compiled code. The processors were called StrongARM and were later acquired by Intel, who pursued the design and used it to replace their not very successful i960 line of processors. They recently renamed the product line XScale and aggressively market the device at high-volume markets such as mobile phone handsets. All trace of 'ARM' seems to have disappeared from the product names and descriptions, but under the skin they are still very much aligned with ARM, who refer to them as technical partners.

There is a school of thought which says that in high-volume embedded markets ease of software development is not an issue, as software development costs represent only a small fraction of the product price. The reality is that software development costs are small because programming teams are small, and no matter how much money you have, there is a limit to how many developers you can throw at a project before it starts to fall apart at the seams. This, in effect, means that ease of software development translates into less time to market, a critical element for high-volume consumer devices. The scalability of the ARM architecture, from budget through to feature-packed designs, means that development teams may use, and re-use, familiar and trusted hardware, software and development toolchains across a much wider range of products than is possible with many competing designs.

The way I entitled this article implies that ARM "will be here real soon now". In reality it is already deployed on a large scale. Automobiles, mobile phones, video cameras, GPS navigators, printers, photocopiers, PDAs and routers, not to mention more specialist areas such as aerospace, medical equipment and instrumentation, already use ARM in large numbers and have been doing so for some time. But we do not think of it as a "general purpose computer" the way we do with the x86 family or the PowerPC chip. Yet it does exist in these forms, running Linux, WinCE and even "RISC OS", the operating system originally created for the Archimedes computer and still maintained as a slick Mac-like OS by a small private company who sell it to 'beeb' enthusiasts. There are even a few specialist companies making desktop-type computers in mini-tower cases that look every bit like a PC. But it is unlikely that we will see the world full of ARM-based PCs. We are more likely to see desktop computers replaced by high-powered mobile devices, and chip manufacturers (Intel included) are hastily developing highly integrated chips for mobile devices using the compact, efficient and low-power ARM architecture rather than x86 core solutions.

Whether you like it or not, your next 'personal' computing device is far more likely to be based on an ARM core than on an x86. And that brings us to a closing thought. The IBM PC was based on a simple data entry device used in computer centres, and nobody in their wildest dreams imagined that the architecture would dominate the industry for the next 25 years (IBM market analysts projected total sales over the product's life at 186,000 units). That architecture now looks set to be replaced by a design that was born out of a low-cost computer-for-schools project 15 years ago. In the meantime many very large projects to develop the 'successor' to the PC have fallen by the wayside. Think about this when you start your 'next big thing'!

Operating Systems

Journal: The EU Microsoft report

I just submitted an article to /. about the EU report on Microsoft (get it here) and it got rejected! I fumed a bit about their narrow-mindedness in not mentioning such an important document, only to find that the document was mentioned a few hours later.

Of course, the real reason my submission was rejected was that thousands of other /.ers had already submitted the same link. Unfortunately, by the time I noticed, there were already hundreds of threads and there seemed little point in adding another, yet I could see no thread pointing out something I wished to note... so I am going to put it here, where even fewer people will see it!

When I started reading the legalese at the start of the EU document I immediately cast my eye down to the page count... 300 pages! I nearly closed it there and then. Fortunately I read on, and was pleasantly surprised by a document that starts by describing the PC industry and market as a whole in a concise yet meticulously precise manner.

It then goes on to analyse Microsoft's monopolistic practices, focusing on key points and backed by exceptional quotes; many of the EU's arguments are justified by direct quotes from Microsoft themselves, a case of 'anything you say will be taken down and used against you in evidence'.

One of the great things about the report is that it is the result of an enquiry by a commission unencumbered by all the legal tactics and technical motions that tend to swamp legally oriented investigations. It is just the facts and nothing but the facts.

But that is not why I am writing about it here. I want to mention it because it is a fine example of how to write. Not for /., nor for a novel, but to explain technical issues to ignorant management! It is also a useful resource to quote from and refer to. My little finger tells me that this document will become a noted reference point, and by leveraging both its style and its content it offers the possibility of converting technical rants and raves into persuasive professional documents that will get your views accepted outside of geek circles.

So put down that game pad and go read it now!

Java

Journal: Java !Java

Java is a valid language. Many would argue that it has an important role to play in "institutional" programming, as it strives to maintain a persistent API that keeps legacy software valid in a volatile market.

Part of Java's technique for long-term stability is to target only the Java Virtual Machine, an "adapter" that makes all hardware look the same, in much the same way that audio jack adapters let your walkman headphones plug into an old mono gramophone or a modern Dolby 5.1 soundcard.

The original idea was to allow programs to be downloaded to different devices, something that has only really been achieved on mobile phones.

Many non-Java users just see it as a way to write programs with a GUI that manages to look out of place on any number of platforms.

And yet both the Java language and the JVM have separate merits that can be exploited individually, yet seldom are.

Using the Java language to write institutional code that will be around for years is a GOOD THING. But I can't see any reason why, once the code has been tested and validated on the JVM, it could not be compiled to a native platform for maximum execution efficiency.

Likewise, the JVM is a useful tool that need not be confined to running Java apps; indeed, as far as the "universal download and run" concept is concerned, there is no particular reason to confine its use to running classes developed in Java. Compilers that target byte code exist for many other languages, but the only one that seems to get much of an airing is Jython, the Python interpreter for the JVM, and that is mostly viewed as a scripting tool.

One thing I would like to see is Sun recognising the difference and launching the JVM under an RFC as a universal low-level environment. On the internet the JVM has primarily been viewed as a browser plug-in, which is a neat idea but requires a level of collaboration that is atypical of the IT industry. And yet, if we imagined the JVM as an RFC-defined universal execution environment, we could see it being embedded into server applications so that clients could upload their JVM macros to carry out tasks on the server.

Imagine if you could upload a script to your POP3 server to process your email before downloading it. Many hackers who have access to hosted machines do that with ordinary system scripts, but that is very much a hacker thing. A universal environment would mean that an ordinary user could get a "class" from an anti-virus or spam filter specialist and send it to their POP server.

Some Java DBs have come close to this concept by allowing the invocation of stored procedures and functions from an external classpath, which obviously could include an uploadable area. But I think there is a lot more potential.

Look at web-based archives, wikis and forums, for example. Webmasters are forever changing the interfaces to accommodate all kinds of search and browse options. I feel they should be taking on the role of librarians, concentrating on archiving and cataloguing the information in a consistent and permanent format. That way users could upload their own "personalised" webservers, with built-in facilities to access the server data in the way they want, onto the server (needless to say, we would be talking about long-term subscribers and not casual surfers).

And what site would be a prime target for an experimental testbed? You're looking at it right now!

User Journal

Journal: A blog

Well, I have never got round to keeping a blog, so perhaps I can use my /. journal for letting off steam.

It would be interesting to know if anybody ever reads this... If you do, please drop me a reply!
