The evolution of the x86 instruction set is completely chaotic. Intel and AMD are competing to introduce new instructions for the same purposes, and the result is incompatibility. For example, their virtualization instructions are not compatible. We need a transparent decision process and an open standardization of new instruction codes. Please see my analysis at http://www.agner.org/optimize/blog/read.php?i=25. Comments are welcome.
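The virtualization example illustrates the point well: Intel's VT-x (VMX) and AMD's AMD-V (SVM) use different instructions and are even advertised through different CPUID feature bits, so software has to probe for and support both. Below is a minimal sketch in C using GCC/Clang's __get_cpuid helper; the bits tested (CPUID leaf 1, ECX bit 5 for VMX; leaf 0x80000001, ECX bit 2 for SVM) are the documented feature flags, but the program itself is only an illustration, not part of the original post.

Code:

/* Probe which vendor's virtualization extension is present.
   A hypervisor needs entirely separate code paths for the two. */
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang __get_cpuid */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Intel VT-x: CPUID.1:ECX.VMX[bit 5] */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        printf("Intel VMX (VT-x) available\n");

    /* AMD-V: CPUID.80000001h:ECX.SVM[bit 2] */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        printf("AMD SVM (AMD-V) available\n");

    return 0;
}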
When it comes to 'x86 backwards compatibility'... considering that we are in the era of ubiquitous multicore CPUs, why not simply keep just *1* fat core backwards compatible? Old code is less likely to take advantage of multiple cores anyway. Just think of all the possibilities... one could even mix cores, x86 + the next-gen Itanium... it's basically Cell-ish, but why isn't the x86 world all over this idea?
When it comes to 'x86 backwards compatibility'... considering that we are in the era of ubiquitous multicore CPUs, why not simply keep just *1* fat core backwards compatible?
What happens when I want to run 2 "old" (or just serial) programs at the same time?
The old cores were so slow relative to modern ones that it makes more sense to emulate them. Look at MAME as an example. Most of the 'slowness' of its emulation comes from the simulated hardware interrupts built into the emulation layer(s). If you did not emulate those same interrupts, the programs that relied on them would not function correctly. Remember, old programs often relied on the built-in quirks of processors. When bugs were fixed in hardware, it crippled the old software written to work around those bugs...
What happens when I want to run 2 "old" (or just serial) programs at the same time?
The best option depends on the OS and on how different the cores are, but it could even be made end-user configurable (software emulation, or scheduling only on the backwards-compatible core). Software-wise it is a solved problem; it has already been done in the past. For instance, it was possible to buy a PPC coprocessor add-on card for 68k-based Amigas.
The point is, people won't be happy if they buy a state-of-the-art CPU and find all their old programs running slower, so old code should run natively at 100%. But of course the "new cores" need to be significantly faster than the old core, otherwise the needed software adaptation won't take place.
At least I would rather take 1 backwards-compatible core + X significantly faster ones than, say, the 6 or 8 backwards-compatible ones we are getting now (and who knows how many cores in the future?). My proposed alternative would lead to improved single-thread performance...
Well, maybe the reason they are not doing it is that I took ideas from 3 such successful technologies... Amiga (RIP), Cell (development stopped), Itanium (no, I'm not saying anything, as I don't want this thread to turn into another of "those threads" ;))
The best option depends on the OS and on how different the cores are, but it could even be made end-user configurable.
The OS would have to catch an exception when a thread executes obsolete instructions and move the thread to another core. I don't think OS developers are too happy about making CPU-specific adaptations; the OS would have to know the details of every processor that has asymmetric cores. And it would be even harder to handle for application developers. I don't see why you want to put the burden of configuring this on the end user, who may have no technical knowledge.
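For what it's worth, a user-space approximation of that catch-and-migrate idea can be sketched on Linux: catch SIGILL (where a #UD fault surfaces), pin the thread to a designated compatible core, and let the faulting instruction retry there. Everything here is an assumption for illustration; COMPAT_CORE, the choice of core 0, and the whole policy are hypothetical, and no shipping OS actually schedules threads this way.

Code:

/* Hypothetical sketch, Linux-specific: migrate a thread to an assumed
   backwards-compatible core when it hits an unsupported instruction. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>

#define COMPAT_CORE 0   /* assumed index of the fat, compatible core */

static void on_sigill(int sig)
{
    (void)sig;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(COMPAT_CORE, &set);
    /* Restrict the calling thread to the compatible core; returning
       from the handler retries the faulting instruction there. */
    sched_setaffinity(0, sizeof(set), &set);
}

int main(void)
{
    struct sigaction sa = { .sa_handler = on_sigill };
    sigaction(SIGILL, &sa, NULL);
    /* ... run legacy code here; an unsupported opcode now triggers a
       migration instead of killing the process ... */
    return 0;
}

In practice the kernel, not the application, would have to do this for every program transparently, which is exactly the CPU-specific burden on OS developers described above.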
The point is, people won't be happy if they buy a state-of-the-art CPU and find all their old programs running slower, so old code should run natively at 100%. But of course the "new cores" need to be significantly faster than the old core, otherwise the needed software adaptation won't take place.
I don't think 1 fat core would ever be enough. Another situation where more than one fat core would be required is consolidation of older systems (virtualization). Dynamically clocked cores are probably a more elegant solution than hard-wired circuits, possibly with a wide range of clocks to go along with a much higher core count. I would think something like Magny-Cours that could run 1 core @ 3.6 GHz, 2 @ 3.4, 4 @ 3.0, 6 @ 2.8, all the way down to 12 @ 2.3 would be a nice compromise between high performance, robustness to load, and scalability.
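To make that trade-off concrete, here is the frequency ladder from the post written out as a lookup. The numbers are simply the hypothetical figures quoted above (not actual Magny-Cours specs), and real parts expose this behaviour through P-states and turbo limits rather than a fixed table.

Code:

/* Illustrative only: clock speed as a function of active core count,
   using the figures quoted in the post above. */
#include <stdio.h>

static double clock_ghz(int active_cores)
{
    if (active_cores <= 1) return 3.6;
    if (active_cores <= 2) return 3.4;
    if (active_cores <= 4) return 3.0;
    if (active_cores <= 6) return 2.8;
    return 2.3;            /* 7 to 12 active cores */
}

int main(void)
{
    for (int n = 1; n <= 12; n++)
        printf("%2d active cores -> %.1f GHz\n", n, clock_ghz(n));
    return 0;
}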
Quote:
At least I would rather take 1 backwards-compatible core + X significantly faster ones than, say, the 6 or 8 backwards-compatible ones we are getting now (and who knows how many cores in the future?). My proposed alternative would lead to improved single-thread performance...
As power gating advances, I don't think your alternative would necessarily lead to improved single-threaded performance. Imagine the case where cores could be completely turned off, as in systems that support hot-swap processors. With all but one core actually off (or in some sub-C6 state), the entire power budget would be available to the single core, providing the maximum achievable single-threaded performance.
Apple has replaced the machine architecture for the Macintosh twice. First from Motorola 68k to PowerPC, then from PowerPC to x86. I believe IBM is going to do the same for their mainframes.
cheesybagel wrote:
Apple has replaced the machine architecture for the Macintosh twice. First from Motorola 68k to PowerPC, then from PowerPC to x86.
The Mac was briefly Motorola 88k-based between the 68k and PowerPC eras, but that work was never released as a product.
Quote:
I believe IBM is going to do the same for their mainframes.
IBM has appeared to head down this road a few times but keeps turning out new generations of native z processors. Buying PSI in July would certainly be a big help if IBM finally went ahead and moved its z series to a non-native platform, but I'll believe it when I see it.