
[oc] Rambling thoughts, and an offer of help



Hi,

I'm a C/C++ programmer, with 9 years' experience with C and about 21 years' experience programming for various systems. I also have some experience in electronics and microelectronics (enough to simulate a chip, understand what's happening, and debug the design).

My experience with "Real World" chips includes building with and using the 80x86/80x87 series, the 65x02 series, the Transputer T400 and assorted 4-bit processor kits.

My chip-design experience includes simulation in software (Pascal and C) and designing systems with conventional EE software. I have also designed a simple ALU using neural-network software.

If any of this experience would be useful to any present or future OpenCores teams, I would be more than happy to help out in any way I can. This looks like an interesting project, with a lot of exciting potential.

Ok, now on to the rambling thoughts. Feel free to ignore anything below this point.

First, in EE, there has been a lot of theoretical work on PIM (Processor In Memory) architectures. I think it might prove interesting to see if a totally open environment, unconstrained by "real world" deadlines, market pressures and budget constraints, can run with this idea any better than "conventional" manufacturers, who have generally ignored it.

Second, I have been thinking about some of the problems that Object Oriented software has been facing. The biggest is that Von Neumann and Turing architectures are very much geared towards "classical" programming styles.

Translating between two paradigms can never be efficient, so the more OO your design, the more you -must- lose in the translation. I've talked this over with various people, and the usual response is that high-level hardware is complex, slow and unreliable, and that it's cheaper and better to buy more powerful conventional systems.

However, having thought about this further, I've concluded that you DON'T, in fact, need high-level hardware to get OO. Indeed, quite the opposite. Each method is a simple, isolated unit that stores its data in a common pool, alongside all the other methods in that object. What you are essentially describing here is a Macroprocessor.

(A Macroprocessor can be defined as a collection of microprocessors, with associated common memory, in a similar way that a microprocessor can be regarded as a collection of sub-units - such as the ALU - with a common pool of registers.)
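
Purely to make that analogy concrete, here is a rough C model of what I mean (all the names and sizes below are my own invention, for illustration only, not a proposed implementation):

    /* A rough software model of the macroprocessor analogy:
     * one simple micro-core per method, all cores in an object
     * sharing a common data pool, much as the sub-units of a
     * microprocessor share a register file. Sizes are arbitrary. */

    #include <stdint.h>

    #define POOL_WORDS  64   /* shared data pool (the "common registers") */
    #define CORE_REGS    8   /* private registers per micro-core          */
    #define MAX_METHODS 16   /* micro-cores per macroprocessor            */

    typedef struct {
        uint32_t reg[CORE_REGS];  /* private working state of one method */
        uint32_t pc;              /* program counter into that method    */
    } MicroCore;

    typedef struct {
        uint32_t  pool[POOL_WORDS];   /* the object's instance data */
        MicroCore core[MAX_METHODS];  /* one micro-core per method  */
        int       ncores;             /* methods actually present   */
    } MacroProcessor;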

A Macroprocessor would load a single object in one go, and carry out any processing involving that object in one operation. A cluster of Macroprocessors would be capable of running an OO program entirely through inter-processor communications. No references to memory, except for system I/O and swapping of instances, would ever be necessary.

Because the individual methods in an object are extremely simple, you want an extreme RISC architecture for the individual processors. There is absolutely no need for hundreds, or thousands, of instructions to search through if your methods average 5 or 10 lines each. A definable 8- or 16-instruction RISC core would make much more sense. (If programmable architectures aren't practical for this, you could probably make do with a more generic 32-instruction RISC core, rather than a specialised, single-purpose system.)
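
As a thought experiment, a 16-instruction core might decode something like this, simulated in C (the opcode set below is just one guess at a workable minimum):

    /* One guess at a 16-opcode method core. A 4-bit opcode plus
     * three 4-bit fields fit a whole instruction into 16 bits. */

    #include <stdint.h>

    enum {
        OP_NOP, OP_LOAD, OP_STORE, OP_MOVE,
        OP_ADD, OP_SUB,  OP_AND,   OP_OR,
        OP_XOR, OP_SHL,  OP_SHR,   OP_CMP,
        OP_BRZ, OP_BRN,  OP_SEND,  OP_RET   /* SEND = message out */
    };

    /* Fetch-decode-execute for one instruction (fragment: no bounds
     * checking, only a few opcodes shown, and a 16-entry register
     * file assumed). 'pool' is the shared data pool, 'reg' the
     * core's private registers. */
    void step(uint16_t instr, uint32_t *reg, uint32_t *pool, uint32_t *pc)
    {
        int op = (instr >> 12) & 0xF;   /* opcode          */
        int rd = (instr >>  8) & 0xF;   /* destination reg */
        int rs = (instr >>  4) & 0xF;   /* source reg      */
        int im =  instr        & 0xF;   /* 4-bit immediate */

        switch (op) {
        case OP_LOAD:  reg[rd] = pool[reg[rs] + im]; break;
        case OP_STORE: pool[reg[rs] + im] = reg[rd]; break;
        case OP_ADD:   reg[rd] += reg[rs];           break;
        case OP_BRZ:   if (reg[rd] == 0) *pc += im;  break;  /* forward only */
        /* ...remaining opcodes elided... */
        default:       break;
        }
        (*pc)++;
    }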

At the Macroprocessor layer, you would want mostly calls for inter-processor communication, and some calls for loading and saving registers. Anything else should filter through directly to the relevant microprocessor.
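
Building on the struct from the earlier sketch, the macro layer might then reduce to a couple of operations like these (the function names and the four-word argument window are pure invention on my part):

    /* Hypothetical macro-layer operations: everything visible at
     * this level is either a message between objects or a swap of
     * instance data; all other work happens inside the micro-cores. */

    typedef struct {
        int      method;     /* which micro-core to invoke  */
        uint32_t arg[4];     /* small fixed argument window */
    } Message;

    /* Deliver a message: copy the arguments into the target core's
     * private registers and restart it from its entry point. */
    void mp_send(MacroProcessor *mp, const Message *m)
    {
        MicroCore *c = &mp->core[m->method];
        int i;
        for (i = 0; i < 4; i++)
            c->reg[i] = m->arg[i];
        c->pc = 0;
    }

    /* Instance swapping: the only memory traffic the cluster needs. */
    void mp_load(MacroProcessor *mp, const uint32_t *instance)
    {
        int i;
        for (i = 0; i < POOL_WORDS; i++)
            mp->pool[i] = instance[i];
    }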

What do others think of this heavily parallelised design? Is it worth exploring further, to see if it would offer any genuine advantages?


