Re: [openrisc] Patch: Reduced gdb<->target communication
On Friday 11 April 2003 02:32, Scott Furman wrote:
> Marko Mlinar wrote:
> >>Under simulation w/ or1ksim, the serial target is extremely slow, partly
> >>since stub code must run on the simulator for every serial message
> >>exchange and partly because the effective baud rate of the simulated
> >>UART is quite low. I can combat this slowness by setting a very high
> >>baud rate on the simulated UART. For example, I set the divisor to run
> >>the UART at 921 kb/s which is too high a baud rate to run on the real HW
> >>but makes it a little less sluggish when simulated.
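(Side note: for a 16550-style UART with 16x oversampling the divisor is just
clock / (16 * baud). A tiny sketch - the 100 MHz clock below is only an
assumed example, substitute whatever your simulator config actually uses:

    /* Rough sketch: 16550-style divisor math, assuming 16x oversampling.
     * The 100 MHz system clock is an assumed example value. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long clock_hz = 100000000UL;  /* assumed system clock */
        unsigned long baud     = 921600UL;     /* desired baud rate */
        unsigned long divisor  = clock_hz / (16UL * baud);

        printf("divisor = %lu, actual baud = %lu\n",
               divisor, clock_hz / (16UL * divisor));
        return 0;
    }
)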
> >
> >BTW: how fast does or1ksim run on your computer?
> >Do you think we should spend more time making it faster?
>
> or1ksim seems to run at about 2 or 3 MIPS on my 2.8 GHz P4 system when I
> disable all I/O peripherals except the UARTs.
>
> A decade ago, I worked on an instruction-level simulation of a 64-bit
> workstation processor. As I recall, it simulated at almost 10 MIPS (on
> a host processor that ran at about 150 MIPS), but we spent a *lot* of
> time optimizing the simulator for speed, e.g. pre-decoding instructions,
> identifying "traces" of instructions that were branch-free and
> exception-free so that they could be executed more efficiently, and so
> forth. But, that was a different era: then the only vehicle for SW
> development for years at a time was simulation (while the HW teams
> labored to produce a chip), so it made sense to optimize the heck out of
> the simulator, even if it made the simulator code less maintainable.
> Now that it's relatively trivial to synthesize a microprocessor on an
> FPGA, I don't think that simulation speed is as critical as keeping the
> simulator code maintainable. That said, I'm guessing that there's some
> low-hanging performance fruit in the simulator that could be found if it
> was profiled.
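Pre-decoding alone should help a lot if or1ksim does not already do it -
roughly something like this (all names below are made up for illustration,
this is not how or1ksim is structured today):

    /* Sketch of pre-decoded dispatch: decode once, then the hot loop only
     * follows a function pointer.  Names and layout are illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    struct cpu { uint32_t r[32]; };

    struct decoded {
        void (*fn)(struct cpu *, const struct decoded *); /* semantics */
        uint8_t rd, ra, rb;                    /* pre-extracted fields */
    };

    static void op_add(struct cpu *c, const struct decoded *d)
    {
        c->r[d->rd] = c->r[d->ra] + c->r[d->rb];
    }

    int main(void)
    {
        struct cpu c = { .r = { [1] = 40, [2] = 2 } };

        /* Decode happens once, when the word is first seen... */
        struct decoded cache[1] = { { op_add, 3, 1, 2 } };

        /* ...so the per-instruction loop is just an indirect call. */
        cache[0].fn(&c, &cache[0]);

        printf("r3 = %u\n", (unsigned)c.r[3]);   /* prints r3 = 42 */
        return 0;
    }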
Huh, that means each simulated instruction takes ~2000 host instructions. That is a lot.
If I recall correctly, it should be around 100 instructions.
If you have some time, please take a look at what could be the cause.
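Even a quick gprof run would probably point at the hot spots; something
along these lines (the configure flags and binary name are just a guess,
adjust for your tree):

    # rebuild with profiling enabled (these flags are only an example)
    ./configure CFLAGS="-O2 -pg"
    make clean && make
    # run your usual test image, which leaves gmon.out behind
    ./sim <usual options> <test image>
    gprof ./sim gmon.out | head -40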
> >>The gdb serial protocol is very inefficient, though, since it is not a
> >>binary protocol, i.e. it is designed to be human-readable. This results
> >>in a download transfer rate of only a few KB/s when simulating the
> >>target.
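Right - every payload byte becomes two ASCII hex digits, plus framing and a
checksum, so a plain 'M' (write memory) packet roughly doubles the data on
the wire. A small sketch of what such a packet looks like (the address and
data here are arbitrary):

    /* Sketch: build a gdb remote-protocol 'M' (write memory) packet.
     * Every payload byte becomes two ASCII hex digits, plus framing and
     * a two-digit checksum, so binary data roughly doubles in size. */
    #include <stdio.h>

    static void make_m_packet(unsigned long addr,
                              const unsigned char *data, size_t len)
    {
        char body[512];
        int n = snprintf(body, sizeof body, "M%lx,%zx:", addr, len);

        for (size_t i = 0; i < len; i++)
            n += snprintf(body + n, sizeof body - n, "%02x", data[i]);

        unsigned char sum = 0;                 /* mod-256 checksum */
        for (int i = 0; i < n; i++)
            sum += (unsigned char)body[i];

        printf("$%s#%02x\n", body, sum);       /* $M1000,4:deadbeef#c8 */
    }

    int main(void)
    {
        unsigned char word[4] = { 0xde, 0xad, 0xbe, 0xef };
        make_m_packet(0x1000, word, sizeof word);
        return 0;
    }

Newer gdbs can also use the binary 'X' packet for downloads, which avoids
most of that expansion when the stub supports it.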
> >
> >I know. Do you think we should migrate to the gdb protocol instead of the
> >current JTAG one? In either case we would need a JTAG server connected to
> >the board.
>
> Are you proposing that the protocol between the JTAG server and gdb be
> changed to the gdb serial protocol? What would be gained by that?
> (That's not a rhetorical question - I have some familiarity with the
> serial protocol, but just the parts of it that I have used.) It seems
> then that we would have to extend the gdb serial protocol to handle the
> or1k extensions (trace buffer, SPRs, etc.).
It would make the debugger code prettier, since we would have just one protocol.
We could remove our proprietary JTAG protocol; however, that would require
quite a lot of extra effort. A rough sketch of how the or1k extensions could
fit into the gdb protocol is below.
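For the or1k-specific bits, one possibility (only a sketch - the "readspr"
monitor command below is made up) would be to tunnel SPR and trace-buffer
access through the protocol's existing qRcmd mechanism, which hex-encodes an
arbitrary command string for the stub:

    (gdb) monitor readspr 1234
    ->  $qRcmd,726561647370722031323334#fa   ("readspr 1234" hex-encoded)
    <-  $<hex-encoded reply text, or OK>#..

That way gdb itself stays unmodified; only the stub (or the JTAG server)
has to understand the command.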
best regards,
Marko