Thursday, December 21, 2006

A Lot Has Happened... Not Interesting Though...

A lot has happened in the past month or two, but this stuff can barely be called interesting. Intel, as expected, launched Kentsfield and Clovertown. Nothing earth-shattering there. The only surprise, to some extent, was that Clovertown came with a 1333 MHz bus. As expected, AMD launched 4x4 (a.k.a. QuadFather or Quad FX), which, to a great extent, was a huge disappointment: performance barely matching that of a 2.6 GHz Kentsfield, with twice the power consumption!! The upside on the Green side was that 65nm parts rolled out and Barcelona was demonstrated. But overall, nothing earth-shattering on either side, and hence, nothing much to talk about.

In fact, if you look at Dr. Sharikou's blog, it is also getting uninteresting. Yes, in a masochistic way, I do believe that his blog is interesting. But when nothing new is happening, even the great doctor is having trouble keeping up the pace...

So, what to write about?

Well, on the home front, I got a Linksys WRT54GL router and installed the dd-wrt firmware on it. The router is really cool. The configuration flexibility that this router offers at an around-$60 price tag is just amazing. You can assign static IP addresses to DHCP clients, set up a PPTP server, configure PPPoE, and a lot more. If you are in the market for a new DSL/cable router, definitely take a look at this nifty device. It is slightly more expensive than the others, but well worth it. What else? I also got a 1.2-terabyte NAS to back up my 600 GB C2D system. I am using Acronis True Image 10.0 for backup. The software works great. My home network is now getting decently crowded: one desktop, two laptops, one NAS, one VoIP router, one printer, and a PDA, all connected through a gigabit switch and the Linksys router. I love gadgets!
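For what it's worth, those static DHCP assignments on dd-wrt come down to dnsmasq `dhcp-host` entries under the additional-options box in the web UI. A sketch (the MACs, hostnames, and addresses below are placeholders, not my actual network):

```
# Extra dnsmasq options (Services page in the dd-wrt web UI).
# Each dhcp-host line pins one client to a fixed address.
dhcp-host=00:11:22:33:44:55,nas,192.168.1.50
dhcp-host=66:77:88:99:aa:bb,printer,192.168.1.51
```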

Sorry for not posting

Guys! I am extremely sorry for not posting. I was on vacation :) for almost a month. Will post something soon.

Monday, October 23, 2006

Intel is pushing AMD to even more niche markets

On Friday, Intel demoed its Tigerton processor--a quad-core, Core2-based beast that sits on a platform with 4 independent buses. Now that is definitely going to make some heads turn.

Let me reiterate: the Core 2 microarchitecture is vastly superior to the K8 microarchitecture, with K8L addressing only some of the gaps. The only edge that K8L has is its interconnect architecture and integrated memory controller, which give it better memory bandwidth and lower latency. However, with four independent FSBs, Tigerton is to 4P what Woodcrest is to DP, and we have already seen what Woodcrest is capable of (AMD even acknowledged that it is facing competition in 2P, with Intel claiming to have regained some lost market share there). The four independent buses will vastly alleviate Intel's bandwidth problem, while large caches and smart prefetchers can mostly nullify the latency advantage. Essentially, I expect Tigerton to be the new king of 4P and rule them all for a while. K8L with HT3 may show an advantage in some memory-intensive loads, but with Barcelona stuck at three HT2 links, Tigerton will have a full three to six months of unchallenged supremacy.
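To put a rough number on the bandwidth point: an FSB moves 8 bytes per transfer, so four independent buses quadruple the peak. A back-of-the-envelope sketch (the 1066 MT/s figure is the commonly quoted Xeon bus rate; this is peak, not sustained, bandwidth):

```python
# Peak FSB bandwidth: transfers/sec * 8-byte bus width.
# Illustrative napkin math, not an official platform spec.
def fsb_bandwidth_gb_s(mt_s, width_bytes=8):
    """Peak bandwidth of one front-side bus in GB/s."""
    return mt_s * 1e6 * width_bytes / 1e9

one_bus = fsb_bandwidth_gb_s(1066)       # single shared bus
four_buses = 4 * fsb_bandwidth_gb_s(1066)  # Tigerton-style platform
print(f"one 1066 MT/s FSB:  {one_bus:.1f} GB/s")
print(f"four 1066 MT/s FSBs: {four_buses:.1f} GB/s")
```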

Why does it matter? Well, frankly, I don't think it matters that much to Intel as far as revenues or profits are concerned; 4P is already niche. But considering how strongly AMD relies on its 4P business, it has the potential to make the marketplace more difficult for AMD.

And please, don't get started on the vaporware argument. Intel has shown a Tigerton system running. All that AMD has shown is a wafer containing some huge K8L dies.

Sunday, October 01, 2006

Torrenza and 4S

I honestly think that 4S is reaching the end of the road. When you start putting so many cores into a single package, who needs 4P? 4P is already such a narrow market, and the increasing power of 2P will put even more pressure on this already niche segment!

Consider this for a moment: if Intel adds a Dempsey-style internal bus to quad-core Penryn, then the four cores will look like just a single load to the FSB, and arguably Intel will be able to pack two of these quad-cores onto the same package (if the package has enough space). So, in theory, a Dempsey-style internal bus would allow Intel to have 8-core chips by the end of next year. If this happens (I understand that is a big if, but Intel is desperate to claim sustained leadership, so who knows), we are looking at 2P systems with 16 cores!! Going forward, rumor has it that Intel will reintroduce its Hyper-Threading. That would put 32 logical processors on a 2P system. This is bound to make the 4P market ridiculously niche. So what does that mean? Does Intel really need CSI? After all, the dual FSB is more than sufficient for the 2P market...
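The core math behind that 32-logical-processor claim, spelled out (every count here is my speculation about future parts, not an announced product):

```python
# Speculative core counting for the hypothetical 2P scenario above.
sockets = 2            # a 2P system
dies_per_package = 2   # two quad-core dies sharing one package
cores_per_die = 4      # quad-core Penryn die
threads_per_core = 2   # Hyper-Threading reintroduced (rumor)

logical_cpus = sockets * dies_per_package * cores_per_die * threads_per_core
print(logical_cpus)  # 32 logical processors on a 2P box
```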

In my opinion, it does need CSI. The problem is not 4P or 8P--that is a dying breed. The real problem is Torrenza--AMD's ability to couple third-party processors with its own. At IDF, Intel announced that it will open up its FSB to third parties. Pardon my French, but who gives a f&*^ing @#$%? Why would anyone want to put their co-processor on an FSB that Intel is always in a hurry to upgrade? With HyperTransport, you can arguably negotiate different links at different speeds, and hence third parties do not have to upgrade their HT logic to keep up with AMD. On a shared FSB, by contrast, the whole bus is limited by the speed of the slowest component, so third parties have no choice but to run with Intel or be rendered obsolete. And that is exactly what they don't want. Thus, if Intel wants to provide a Torrenza-like capability, it needs a point-to-point interconnect whose links can be negotiated independently. The FSB just won't cut it.
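The slowest-component argument can be sketched as a toy model: on a shared bus every agent runs at the minimum negotiated speed, while point-to-point links each keep their own. The agent names and MT/s figures below are made up for illustration:

```python
# Toy model of shared-bus vs point-to-point link negotiation.
# Speeds in MT/s are illustrative, not real platform numbers.
def shared_bus_speed(agents):
    """On a shared FSB, everyone drops to the slowest agent's speed."""
    return min(agents.values())

def p2p_link_speeds(agents):
    """Point-to-point links negotiate independently per agent."""
    return dict(agents)

agents = {"cpu": 1333, "chipset": 1333, "third_party_coproc": 800}
print(shared_bus_speed(agents))   # the whole bus drops to 800
print(p2p_link_speeds(agents))    # only the coproc link runs at 800
```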

Does Intel need to provide a Torrenza-like solution? Frankly, I don't know. Today there are not many co-processor applications where the co-processor has to interact with the main processor on a clock-cycle-by-clock-cycle basis. But arguably, that is because there is presently no technology that allows a co-processor to interact with the CPU that closely. AMD's Torrenza will make that possible for the first time--and who knows, it might even catch on. Intel cannot afford to ignore Torrenza; that's the bottom line. And that is why it needs a cache-coherent, point-to-point interconnect solution. Maybe it's CSI, maybe it's something else. But they need one for sure...

Saturday, September 23, 2006

My E6700 Arrived

I immediately replaced my P4 560 with it and overclocked it to 3.2 GHz on stock voltage (I tried 3.33 GHz; it booted, but Windows froze just after logon; 3.2 worked, and I did not have the patience to play with intermediate frequencies). Everything worked great. I ran a burn-in test for 12 hours with error checking enabled, and it passed.
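For anyone wondering how the numbers work: the E6700's multiplier is locked at 10x, so the only knob is the FSB clock. A quick sketch of the arithmetic:

```python
# Core 2 clock = FSB clock x multiplier; the E6700 tops out at 10x,
# so overclocking means raising the FSB from its stock 266 MHz.
MULTIPLIER = 10  # E6700

def core_clock_ghz(fsb_mhz):
    return fsb_mhz * MULTIPLIER / 1000

print(core_clock_ghz(266))  # ~2.66 GHz, stock
print(core_clock_ghz(320))  # 3.2 GHz, the stable overclock
print(core_clock_ghz(333))  # ~3.33 GHz, froze after logon
```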

Then I started Adobe Premiere Elements and began encoding a file. Three minutes into the process--boom, the video driver crashed. Tried again, and the same thing happened. I have an ATI Radeon 600 with 256 MB of onboard memory (I am not a gamer). The video driver for that card used to crash quite frequently, but since I updated it to the July release, it had been running quite solidly. I have another Radeon at work, though I don't know the model (it has 512 MB of memory), and the display driver there also keeps crashing. Upgrading the driver hasn't helped me there.

Now I am running the E6700 back at 2.66 GHz. It is still pretty fast. But since I have tasted what 3.2 GHz feels like, it seems kind of slow now...

Why would overclocking the CPU cause the display driver (and only the display driver) to crash? I tried locking the PCIe speed at 100 MHz, but that didn't help either. If anyone has any suggestions, I am open to trying them.

The sad part is, I need the speed for video processing, and the darn driver crashes only when I start encoding with Adobe PE and a couple of other programs.

At 2.66, I have encoded 3 DVDs so far, and everything seems rock solid. I am tempted to try an NVIDIA card, but what is the guarantee that that thing won't crash on me? Has anyone experienced something like this before? My previous NVIDIA card was very stable, but then, it was AGP and I was not trying to overclock. And as I have mentioned before, I am not rich, and hence I cannot spend 100 dollars on a card just to try it out...

Turns out, this was a north bridge problem after all. Raising the voltage on the NB solved the problem. The system is rock stable again. However, the max I am able to reach with 1.45V on the NB is 3.1 GHz. I don't want to raise the voltage any further, and 3.1 is not that far off from 3.2.

Friday, September 22, 2006

4x4 revisited--I told you so!

Remember how only a couple of days ago I doubted whether AMD would release 4x4 on the cheap? Turns out (or at least it seems) I was right. AMD's 4x4 roadmap has leaked (for those who don't understand Dutch, click here). It looks like 4x4 will be available only in FX variants--FX-70, FX-72, and FX-74, coming in at 2.6 GHz, 2.8 GHz, and 3.0 GHz. What's more, you have to buy these processors in pairs--so the upgradability argument that AMD fanboys were making just got flushed down the drain. Each of these processors will have 2x1MB L2 cache and a whopping 125W TDP. Now, I do not think that lineup is exactly cheap. Despite being beaten down pretty badly, AMD still prices its FX-62 at about $800 today. So I do not expect them to price each of these FXes below $600 apiece, or $1200 for the pair (maybe the lowest one at about $999 for two--but again, that is not mainstream).

Mommy, why are the AMD fanboys crying? Well honey, they were expecting caviar, but AMD served them crow!!

Tuesday, September 19, 2006

LV Clovertown at 50W TDP?

HKEPC reports that the low-voltage, quad-core Clovertown (L5310) will be released at a 50W TDP. That would certainly be impressive. AMD has been talking about delivering quad-core at an 80W TDP and hyping it up a lot. If Intel delivers its 50W QC a quarter or two before K8L arrives, that would be something (this is a big IF, since Intel hasn't announced this part). AMD will still keep on hyping the idle power, but really, who gives a crap about idle power in data centers or rendering farms? Also, the 1066 FSB will be more than enough, at least for rendering farms (check out the Kentsfield reviews from Tom's Hardware--a 1066 FSB does not cause a bottleneck on most benchmarks).

What does "same power envelope" mean anyway? You move to a next-generation process, and power is expected to go down. Then you reduce the clock speed a little, which gives you additional power savings. The next-generation process also allows lower voltages, reducing power even further. Everyone is doing it. Only AMD is hyping it.
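The reason those three knobs stack up so quickly is the standard CMOS dynamic-power relation, P ~ C * V^2 * f. A rough sketch with made-up scaling factors (not Intel or AMD data) shows how a shrink plus modest voltage and clock cuts roughly halves the power:

```python
# Rough CMOS dynamic-power model: P ~ C * V^2 * f.
# The scaling factors below are illustrative guesses, not vendor data.
def relative_power(cap_scale, volt_scale, freq_scale):
    """Power of the new part relative to the old one."""
    return cap_scale * volt_scale**2 * freq_scale

# Next-gen process: less switched capacitance, slightly lower voltage,
# and a small clock reduction on top.
p = relative_power(cap_scale=0.7, volt_scale=0.9, freq_scale=0.9)
print(f"{p:.2f}x the original power")  # roughly half, almost for free
```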

Expect to see a lot of hype about idle power from AMD. Perhaps AMD expects its K8Ls to just sit idle? :)

Again, does 2P quad-core make 4P more irrelevant?