My first encounter with an unusual processor was in the early 1980s, when I had to study the Burroughs B1000 minicomputers. I just couldn't find the add instruction! It didn't have one. Instead, it had a number of registers like 'sum', 'difference', 'and' and 'or'. Instructions moved data into the x and y registers and picked up the result from the desired register. It had fewer than 50 instructions. It was just amazing what could be done on that machine with a mere 1 MB of RAM.
However, unusual hardware has a serious problem: how do you get software to run on it? Even today, not all applications are available on the AMD x64 architecture. The x64 architecture succeeded because i386 applications could run unchanged on it, so the processor remained useful even before all the programs were converted.
There are two major concerns in converting a program – the CPU instruction set and the operating environment. It is not easy to make a KDE or GNOME application run on, say, OS/2. Here, we will restrict ourselves to the issues related to the processor instruction set.
Once the instruction set of a CPU is known, the first thing we need is a compiler for the target CPU. The target CPU may not even be available, or if it is, no OS may be available for it. This is where cross-compilation tools become critical. The GNU compiler tools are remarkable. The man page of gcc gives us an idea of the number of machine architectures supported by these tools – a clear indication that they are relatively easy to port to new machine architectures.
Once the compilers are present, we need to compile and link with the appropriate libraries for the target platform. Effectively, we are building a custom distribution for the target platform. Custom build environments may be created by hand, though we can use tools like bitbake (http://bitbake.berlios.de/manual/) and scratchbox (http://www.scratchbox.org) to speed up the cross-platform development process.
The process would be very similar to building Linux from scratch. We build the core libraries and then the kernel and the applications, following a sequence to ensure that the dependencies are resolved. However, in the case of the new instruction set, we do not expect the build to go through effortlessly.
If any assembler code is involved, that obviously needs to be converted. Any code which directly refers to or manipulates the CPU features will need to change. The kernel and the core libraries are the obvious software which will need manual attention. The largest amount of code needing revision is likely to be in device drivers. It may not be easy, but the need is well defined and clear-cut.
Since Linux is widely deployed on many architectures, the design has evolved in a way that makes running Linux on alternate architectures comparatively painless.
Linux was released on AMD64 earlier than competing operating systems.
An interesting question is whether Linux could be ported to any CPU architecture at all. I suspect not.
Consider the example of the Burroughs Large Systems, which are still being produced by Unisys as the A Series machines. This was a stack machine and had instructions like 'Add', 'Pop' and 'Push'. There is never any reference to registers. HP and ICL also made pretty successful stack machines. On the Burroughs Large Systems, different models had a different number of registers, and they were allocated and used by the hardware at run time. I expect the same is still true for the A Series. Optimizing the use of the hardware registers is the responsibility of the hardware, not the compiler. Another intriguing feature of this architecture is tagged memory. Integers, floats and strings had different tags – a feature which could have been a boon for dynamic programming languages like Python and Ruby, but a disaster where memory is allocated and reused for arbitrary purposes.
Restricting ourselves to architectures which can run Linux, even with the compiler and the OS ready, we are less than half done.
Chances are that the applications are written in a high-level language, so with the compilers available, the job should be simple. Experience with the AMD64 architecture indicates that this is far from correct. The fact that AMD64 could run i386 32-bit applications made the migration to the 64-bit platform easier.
The problem arises because we often make implicit assumptions about the architecture. In such cases, the programs will compile but malfunction or crash. I expect these problems are more likely in C/C++ programs, as programmers are more likely to make the effort to 'optimize' the code in these languages.
The most common and well-known issue is 'big endian' versus 'little endian' byte order. Such code will have to be modified, usually by introducing conditional compilation.
The next optimization which can cause grief is the assumption that integer and pointer sizes are identical. For example, an integer is 32 bits but a pointer is 64 bits on AMD64.
The meaning of the type long may also vary across platforms, and inconsistent use of the types int and long can result in unexpected failures.
It is also likely that many applications may not run as well on the new architecture as the application may not make use of the additional capabilities available on the new processor. It may be that the new CPU has better instructions for multimedia which are not being used. In this case, the converted application may not be optimum but would be acceptable. However, this may not always be the case.
For example, existing applications may not exploit multicore processors and may need to be redesigned using parallel algorithms to get the best performance. On an architecture with a hundred cores, where each core has a tenth the performance of an existing processor, an application not redesigned for the new architecture may be unacceptably slow. A real example of such a scenario is probably the Cell processor, used in the PlayStation 3. Porting applications to this architecture may involve new design ideas as well.
Linux has given us a choice in the OS, but it has provided an even wider choice in the variety of processor architectures which may be used. Embedded systems are already using Linux, and the embedded platforms are becoming powerful enough that at least some desktop applications could run well and be very useful on these systems. LinuxLink Radio has two informative podcasts on porting Linux to new boards. (See http://www.timesys.com/services/podcast.htm )
The ability to use the existing applications on the new devices is very relevant. After converting and optimizing the application for the target device, all we need to do is to redesign the user interface to match the keyboard and display options of the devices. A trivial task, right? We may have no choice, as it seems very likely that the default display for the next generation of computer users may well be a 7” screen, like the OLPC or the Eee PC.