
Re: [oc] Beyond Transmeta...




----- Original Message -----
From: "Marko Mlinar" <markom@opencores.org>
To: <cores@opencores.org>
Sent: Wednesday, February 12, 2003 1:45 AM
Subject: Re: [oc] Beyond Transmeta...


> > There are a lot of unknowns here, so don't be so quick to assume anything
> > about power consumption. A general rule of thumb, though, is that if you
> > can generate the same result with less work you will consume less power.
> Yes, it is too early to speak of power consumption. However, I think that
> current designs have reached very low power consumption (not just because of
> technology!), and it will be very hard to do better.

I don't know what the power consumption would be for a first implementation,
nor where it would go once you get further along the learning curve. My
intuitive feel is that it would consume less power, something along the lines
of what biological systems (brains) consume.

> In your case, for
> example, you will need more control logic per bit than conventional CPUs,

I think this is an apples-and-oranges comparison. The control logic and the
bits required to perform a task are not necessarily the same, or even related.
Again, it is too early in the learning curve to make a determination.

> you will also need bigger caches, since you have more fine-grained
> executions, etc.

Not so. There likely would be no cache. A good design could have the
entire working set in some phase of computation (transformation).

> Maybe I have some sort of mental barrier, but I don't find it as
> straightforward as you do.
>

Don't be so hard on yourself. A mental barrier is one of the easiest
things to set aside (once you cross the barrier). Once on the other
side you may find that you can contribute more than those who
crossed the barrier earlier.

> > > But even when leaving aside the implementation issues, you will have
> > > problems with loops, function calls and the sw model, especially with
> > > the PLD idea.
> >
> > Why think in terms of loops and function calls? Go out of the box.
> > Start with a clean sheet of paper.
> Ok, I would agree that function calls can be cut out.
> But loops? I don't think so. Loops are actually a way of dynamically
> duplicating pieces of code/logic. You need some sort of "loops".

You are falling into a common trap: assuming you code the same way
for both computing environments (in other words, you are attempting to use
the same paradigm). This is an entirely different way of processing.

Using this visualization tool:

Consider the traditional CPU design as a six-armed wonder worker
scurrying about a manufacturing plant at blazing speed to build a
complicated product.

Consider the bit-streaming method as having 10,000 one-armed workers
doing trivial tasks.

If you were the architect of the two manufacturing plants, you wouldn't
design the two plants the same way. And you certainly wouldn't set up
the procedure manual (program) the same way for both sites.

In the first plant you might find process loops more efficient, e.g. using
a circular conveyor to bring the partially assembled parts back to the
six-armed worker.

In the second plant you would try to re-task idle workers to the next
step in the overall process.

What works for one doesn't necessarily work for the other.
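To make the contrast concrete, here is a toy sketch in Python (my own
illustration, not from the original discussion; the function names and the
squaring task are invented). The first function is the six-armed worker
looping over its conveyor; the second hands each trivial step to a pool of
workers, with no loop visible in the "procedure manual":

```python
# Toy contrast between the two "plants": one fast worker looping,
# versus many trivial workers each re-tasked with a single small job.
from concurrent.futures import ThreadPoolExecutor

def six_armed_worker(parts):
    # One capable worker: a conventional CPU iterating a loop,
    # bringing each part back around the "circular conveyor".
    result = []
    for p in parts:
        result.append(p * p)  # the whole step done in one place
    return result

def one_armed_workers(parts):
    # Many trivial workers: each handles one small job independently;
    # the program expresses what to do, not the loop that does it.
    with ThreadPoolExecutor(max_workers=16) as plant:
        return list(plant.map(lambda p: p * p, parts))

parts = list(range(8))
assert six_armed_worker(parts) == one_armed_workers(parts)
```

Both plants produce the same product, but the "procedure manuals" are
organized entirely differently, which is the point of the analogy.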

> AFAIK, there
> are only FSMs and stack-based loops (similar to recursion).
> In both cases you have the mentioned problems of communication.
>
> > > There is also the problem of debugging.
> >
> > Initial debugging would be done through emulation. Not unlike what you do
> > now (synthesis). When the routing is proven it would be incorporated
> > into the larger project and tested again.
> ok, I think that would work for 90% of applications.
>
> Anyway, continue the good work; I would be very happy if I could work on
> something not Von Neumann for a change ;)

And not Harvard either.

I would think that Alan Turing would find this a logical extension of his
Turing Machine concept. Instead of an infinitely long tape that the "simple"
machine traverses, you dynamically snip and reconstitute the tape(s) and
route it (them) through thousands to millions of "simple" machines.
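A minimal sketch of that snip-and-route idea (again my own illustration; the
segment length and the bit-inverting "machine" are arbitrary stand-ins for
whatever transformation each simple machine would perform):

```python
# Toy sketch: instead of one machine walking a long tape, snip the tape
# into segments, route each through a "simple" machine, and reconstitute.
def simple_machine(segment):
    # A trivial transformation standing in for one tiny machine:
    # invert every bit of the segment it receives.
    return [1 - bit for bit in segment]

def route_through_machines(tape, segment_len=4):
    # Snip the tape into pieces, hand each piece to a machine
    # (conceptually thousands of them, working independently),
    # then splice the results back into a tape.
    segments = [tape[i:i + segment_len]
                for i in range(0, len(tape), segment_len)]
    return [bit for seg in segments for bit in simple_machine(seg)]

tape = [0, 1, 1, 0, 1, 0, 0, 1]
assert route_through_machines(tape) == [1, 0, 0, 1, 0, 1, 1, 0]
```

Each machine sees only its own snippet of tape, which is what lets the work
spread across arbitrarily many of them.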

> On the other hand I find it a waste to throw all accumulated SW away, just
> because of new CPU design ;)
>

Me too. Just last year I threw out my collection of paper tapes, along with
my old PDP-8 class of computers, DECtapes, and DECpacks. (Actually, I gave
them to a collector.) It was a waste to throw out my accumulated software.

Nothing would likely get thrown away; at the least there would be a period of
transition. At first you would use the bit-stream processing for
special-purpose tasks, e.g. voice recognition or vision systems. Solving what
the meaning of life is. That sort of thing. Something you now dedicate ASICs
or PLDs to. Later, as the technology evolves, the architecture could emulate
the older processor design. (This follows the path of the PLD, with the
processor embedded into the PLD.) An evolutionary process starting with a
punctuated change.

Jim Dempsey

--
To unsubscribe from cores mailing list please visit http://www.opencores.org/mailinglists.shtml