
Friday, March 31, 2006

A busy week..

This week has been a busy one for me; I can't even make a full entry. I'm still reading materials on ATi's implementation of a physics engine using their GPUs, and I have drafted something about it. Whatever it is, ATi's physics solution seems a bit superior to NVidia's; that is, if their GPUs are able to beat PhysX in floating-point operations.

Materials:

PC Perspective - ATI Physics Acceleration Gets a Voice

PC Perspective - AGEIA PhysX Physics Processor Technology Preview

Friday, March 24, 2006

GeForce 7 Series: A Small Family?

June 2005-December 2005

This is kinda funny; I was having a shower this morning, just before I went to work, when something struck me hard: did you realize that NVidia's GeForce 7 series was such a small family of graphics cards for almost six months?

When I think about it, it is an absolute truth that, whether we realize it or not, GeForce 7 consisted only of the GeForce 7800 GTX (June 2005), the 7800 GT (July) and the 7800 GTX 512 (November) before we hit 2006. What makes this family small is that all those cards fall into the same category: high-end. No G70 cards were released to fill the gap in the mid-range and low-end categories at all during the second half of 2005.

Further thinking reveals that the reason for this scenario is relatively simple; so simple, actually, that we just have to think like NVidia's marketing people. Just consider this: a lack of competition from ATi. That's it, I said it.

We all know that ATi had been plagued by problems with their initial R520 design, which forced them to delay the release of the much-anticipated '32 pipelines' R520 (or so we thought at first!). They finally managed to release the X1K series in October 2005, after several delays. Although the X1800 series wasn't the '32 pipelines' card we expected, I praised ATi for offering a full-fledged line-up covering low-end, mid-range and high-end on the same hard-launch day, marking ATi's departure from being labelled a 'paper launch' company.

And I believe that, in response to this event, NVidia released the 7800 GTX 512 into the battle for the performance card. Still, there were no mid-range or low-end graphics cards in the GeForce 7 series. Once again, we were asked to wait. Oops, I almost forgot that there was the 6800 GS (a 6800 GT look-alike in performance) released somewhere in November. While it successfully acted as a filler between the 6800 GT and the vanilla 6800 (and at the same time fell nicely into the mid-range category to blast the X1600 XT away, during that timeframe!), the fact remains that mid-range, budget gamers still couldn't get hold of G70's enhancements over NV40. Once again, the majority of gamers had to wait.

Should I blame NVidia for this? Morally, I might want to. This game of catch-up really bothers the majority of gamers, and it created a situation that was not a win-win scenario. When one party failed to compete for a certain period of time, the other party took advantage of the situation. But how can I blame NVidia? This is just the nature of business, pure and simple.

Should I blame ATi? Far from it; they didn't purposely leave a big gap in the middle of the competition. Whatever fiasco happened during the development of R520 was unexpected; it just happened. I think ATi's excellent launch was an attempt to compensate for the lost competition during the second half of 2005.

Just one thing: if I were in NVidia's shoes, I would have released the 7300 and 7600 (and even a 7800 GS for the AGP market) between June and December last year. That would have secured me the biggest market available without being challenged by any competing products at all. So I wonder why NVidia didn't do this; they must have their own reasons.

In the end, when it comes to a very tight competition like this, consumers are at the mercy of these companies, especially when the competition involves only two parties. When one goes down, we fall back on the other party and just hope the situation still favours us. Unfortunately, we don't get to see that a lot.

A third company in the competition would probably balance the situation better.

Thursday, March 23, 2006

My Gaming Casing - PowerLogic Atrix 5000

For many years, I used only plain casings for my PC. I mean purely simple tower casings without any extra stuff and 'bling'; I just didn't care.

Finally, I took the chance to own a better casing.





My PC gaming rig.



My gaming rig in the dark

I'm not a big fan of 'blinging' your PC to the extreme; I just like to keep it nice, simple but attractive. I found that the PowerLogic Atrix 5000 is just the one for me. It cost me about RM170 though.

There are two good spots for sucking air into the casing. One is on the left side of the casing and the other is underneath the top panel. This excludes the rear 90 mm fan, which I think is a necessity for all casings today. One 120 mm LED fan is provided for the left-side air intake, while the top spot comes without any fan; we need to install our own.

There's a cylindrical plastic duct installed on the inside of the left panel, directly above the CPU location, which is meant to draw air directly onto the CPU heatsink/fan for cooling. For better results, this duct can also be replaced with another fan to blow air at the CPU more efficiently.

From my heavy gaming sessions at home, I can say that this casing works fine at eliminating the heat inside it. When I was still using the X800 GTO, the heat from that graphics card was blown directly out of the casing by its dual-slot cooling solution. My current 7800 GT, however, uses a single-slot cooler, so you know the rest. I initially thought that the casing's internal temperature would be way higher, but the reading on the front LCD display showed no difference (possibly only a fraction of a degree Celsius). So I think that's good enough.

The front LCD display is configured to display temperature, HDD RPM, CD RPM and the time. Those readings are made possible by the sensors supplied with the casing. The illuminated PowerLogic logo on the front can be turned on and off, so there's a bit of flexibility there anyway (this bling is useless during daytime or under strong lighting).

Here's the actual spec from Leapfrog, the distributor of the casing:

Specifications
Multi-function Digital Controller for 2 Fans (CPU / HDD or VGA)
Temperature display for CPU, VGA, HDD and Case Temperature
Multi-function display for Clock, Date
12 cm Color LED Side Fan and 9 cm Back Fan built-in
Ultra Cooling System with multi-ways Grid + Filter System
Only 1 screw reveals the whole case
Screwless, Tool-less FDD, HDD and Expansion Slots
Screwless front panel (Snap On/Off)
Install up to 6 Fans : 2 x Rear, 2 x Front, 1 x Side, 1 x Top
High Quality 0.8 mm SECC Steel, All edges debarred for safety
Fits in all motherboard up to 244 mm
Drive Bay : 4 x 5.25 External, 2 x 3.5 External and 6 x 3.5 Internal

Well, I guess there is more to it than what I can observe. Overall, the PowerLogic Atrix 5000 will definitely look stunning in a room. At such a good price, I found it hard to resist. Of course, the internal finishing is not as good as those from Lian Li or Coolermaster (I did find some sharp edges inside the aluminium bays), but this casing is worth buying!

Wednesday, March 22, 2006

Future NFS screenshots??

I wish NFS could be like this.

I wish that the next Need For Speed could look something like this!

Words of the Day: SLI Physics!

I should have included another Words of the Day before this one: Quad SLI. But then, I think it's just another extension of the SLI campaign, so no biggie here.

Okay, based on my readings at HardOCP, RojakPot and a few forums, it appears that NVidia has had another trick up its sleeve, waiting to be revealed all this time. Yep, physics is the way to go, according to the rhythm of future gaming.

Introduction

I think this is all rooted in the idea of 'multiple processors' for PC processing power. I imagine that the 'siliconised' people at Intel, AMD, NVidia and ATi had become too preoccupied with the 'race for more MHz'. Later on, we witnessed the birth of multicore CPUs as well as multiple GPUs (SLI, Quad SLI soon, and CrossFire) in mainstream computing.

With the power of multiplicity in hand, people start thinking about how to make use of it. Hence we are served with the idea of multithreaded applications and, soon, gaming as well. The benefits of multicore CPUs are already here to enjoy, and today multiple GPUs from NVidia, or SLI to be exact, add another weapon to the gaming arsenal.

SLI Physics

I think the collaboration between Havok and NVidia in creating this feature hits sweet spots in various aspects. While I'm mostly requoting what many reviewers have said, I realised that many of my opinions sit on the same track as theirs.

In games, physics is one of the main components that make up the realism, besides graphics. Since the birth of the GPU, graphics complexity has been taken care of nicely, so what we see today is a steady progression of adding more 'eye candy': graphics features, pixel processing, shaders, vertices, DirectX and so on. While all that is good, we are missing anything revolutionary in gaming. I bought my GeForce DDR six years ago, and basically it has been like this ever since.

Today, NVidia is teaming up with Havok to integrate the power of the Havok FX engine into the GPU. We know that Havok takes a software approach to bringing realism to physics, contrary to AGEIA's solution, PhysX, which employs dedicated hardware to do so.

Basically, the idea is that in an SLI configuration, the second GPU is dedicated to physics processing while the first GPU concentrates on graphics as usual. The rule is that the hardware has to support SM3.0 (since this is what Havok FX uses to calculate physics) and the game must also support Havok FX. Simply put, we just have to install a proper ForceWare driver (one supporting Havok FX) for our GeForce 6 or 7 series and play any supporting game to see its effects. On Havok's side, however, they mention that this works on any SM3.0 graphics card, which means we cannot rule out ATi's X1800 and X1900 series.

NVidia adds the further note that the physics engine also works on a single NVidia card. They also recommend the GeForce 7600 as the minimum graphics card model for this. Now, putting those two notes together, I wonder how taxing the situation can get.

The best part of the story is that no additional hardware is needed! This collaboration hits the spot where the installed base for the hardware is already huge, and I reckon the rate of adoption is going to be fast. I also reckon that this is another push from NVidia to encourage gamers to adopt SLI.

The Actual Truth

People immediately started comparing SLI Physics with AGEIA's PhysX. It then became obvious that NVidia's solution differs not only in its hardware/software approach, but also in which gaming physics it targets. Simply put, SLI Physics acts on a game's physics in a way that enhances the eye-candy aspects, not the entire gameplay. It's kinda difficult to put it in my own words, so here's an excerpt from TechReport regarding SLI Physics:

"Unlike other approaches, Havok FX handles physics computations directly on the GPU, allowing it to calculate and render object movement with minimal readback to the CPU. However, Havok FX is limited to what's referred to as "effect physics," or physics that don't affect gameplay. Havok prefers to keep gameplay physics on the CPU, leaving Havok FX with the physics calculations necessary for visual effects and other eye candy. Fortunately, modern GPUs apparently do a pretty good job of crunching those types of physics calculations."

This is another point where it differs from PhysX. PhysX offers all-out physics computation inside dedicated silicon, removing it 100% from the CPU.

Anyway, I think SLI Physics still removes a significantly large portion of the physics calculation from the CPU, so in this department it is not too shabby either.

The Rationality

When SLI Physics was announced, one thing immediately cropped up in my mind: load balancing. How do we utilise two GPUs in SLI to perform physics effectively? Having read NVidia's implementation and a few flaming discussions, the current setup will utilise the whole second GPU as a dedicated physics calculator.

One question: how much of that second GPU is actually going to be utilised, in terms of transistors?

Looking at TechReport's excerpt above, GPUs can do a pretty good job of calculating physics, but the question remains: at what percentage of the whole GPU? I am no expert in semiconductors or CPU manufacturing, but since many parts of a GPU are tailored only for processing graphical data (pixel shaders, vertex shaders), I would say that quite a large portion of a GPU is useless for calculating physics. The only part I think is relevant is the ALU, where the core mathematical calculations are performed. Of course, this is my biggest mystery around the issue; someone else out there might have the answer.

Whatever it is, this particular implementation raises a mostly economic issue for SLI adopters. Is it worth sacrificing one GPU for physics, leaving only one GPU running graphics in SLI? This issue is so mind-boggling that one might wonder about the impact on graphics performance when switching SLI Physics on. Logical thinking would indicate that framerates will drop as a result.

If the performance of that second GPU can compensate for the loss of graphical computing power, that's fine. The problem is, I just don't think so. How limiting is the physics computation in gaming, actually? There are no numbers to describe this other than the pure virtue of megahertz. Still, if we say that the 650 MHz core clock of a 7900 GTX is used solely to calculate physics, is that performance competitive with an AMD FX-57 doing the same thing?

Still, load balancing would be the best implementation in this kind of situation. Fortunately, it will be supported soon, although not in the early versions of SLI Physics. That's a relief.

AGEIA's PhysX

This is where AGEIA's dedicated card comes into consideration. The solution seems promising in all areas except game support, where the competition is tight. Havok has secured many game developers with excellent games, while AGEIA, in my opinion, has Epic Games' support for its much-awaited Unreal Engine 3, along with a few other prominent game developers. To tilt things their way, AGEIA provides the PhysX SDK for free to all developers.

$200++ for a PhysX card? I'd say that's a very good solution IF you want to compensate for not being able to afford a high-end gaming CPU.

Summing It Up

I actually praise NVidia and Havok for coming up with such a solution on an existing platform. Innovation at its best. We have yet to see an actual gaming rig in action under an SLI Physics setup, but I hope that when we do, we will know where our money goes.

Further readings: Adrian's RojakPot and TheInquirer.

Tuesday, March 21, 2006

Personal notes on G71, G73...

I need to store this anyway:
  1. G71 - GeForce 7900 GTX, 7900 GT - 90 nm
  2. G73 - GeForce 7600 - 90 nm
  3. G72 - GeForce 7300 - 110 nm

Summarised from AnandTech:

7900 GTX:
8 vertex pipes
24 pixel pipes
16 ROPs
650 MHz core clock
1600 MHz memory data rate
512 MB of memory on a 256-bit bus
$500+

7900 GT:
8 vertex pipes
24 pixel pipes
16 ROPs
450 MHz core clock
1320 MHz memory data rate
256 MB of memory on a 256-bit bus
$300 - $350

7600 GT:
5 vertex pipes
12 pixel pipes
8 ROPs
560 MHz core clock
1400 MHz memory data rate
256 MB of memory on a 128-bit bus
$180 - $230
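
As a side note, the memory bandwidth implied by those numbers is easy to work out: effective data rate times bus width divided by eight. Here's a quick sketch (the inputs are just the specs listed above, nothing more):

    # Quick sketch: derive memory bandwidth from the specs listed above.
    # Bandwidth = effective data rate (MHz) x bus width (bits) / 8.
    cards = {
        "7900 GTX": (1600, 256),  # (effective data rate in MHz, bus width in bits)
        "7900 GT":  (1320, 256),
        "7600 GT":  (1400, 128),
    }
    for name, (rate_mhz, bus_bits) in cards.items():
        gb_per_s = rate_mhz * 1e6 * (bus_bits / 8) / 1e9
        print(f"{name}: {gb_per_s:.1f} GB/s")
    # 7900 GTX: 51.2 GB/s, 7900 GT: 42.2 GB/s, 7600 GT: 22.4 GB/s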



So, basically, the 7900 GT is an overclocked 7800 GTX, and the 7900 GTX is a core-overclocked 7800 GTX 512. Their little brother, the 7600 GT, is supposed to be a replacement for the 6600 GT, targeting the price/performance crown. I reckon that with aggressive pricing like that and wide availability, ATi is going to get tough competition here. Thus we already see around the Net that the X1900 XT, and maybe even the XTX, are priced at less than $500. Just show me the money!

Friday, March 17, 2006

My 7800 GT on AquaMark 3

I just threw this benchmark at my rig to see how my 7800 GT performs:

7800 GT on AquaMark 3

Friday, March 10, 2006

The peculiarity of my Aeolus 7800 GT

I turned on my PC last night and, as usual, the monitor displayed the BIOS readout of the graphics card installed in the system. Initially I didn't notice it; it flashed by in a glimpse, but there was something strange about the VGA BIOS readout. I restarted the PC and, with the virtue of a 1 MP handphone camera, took a quick snap of it. Let's have a look:

7800 GT bios


There it is, one line that says it all: 'Engineering Release - Not For Production Use'. If I were one of AnandTech's hardware reviewers, I probably wouldn't bother about this at all; hardware reviewers are regularly provided with engineering samples of the products they review.

Then how on earth could I end up buying an engineering sample of a 7800 GT on the retail market? This really boils my blood, as this is the second time I believe I've been tricked into buying an 'impure' retail graphics card. The first was the former Sapphire X800 GTO FireBlade, where I discovered that its BIOS had been tampered with. Why not? A default clock of 500/550 core/memory, as reported by RaBIT? Malaysian PC retailers must have been fuggin' retarded! Not all of them, but enough to fool a few enthusiasts around! Talk about customer rights to curb unhealthy business ethics here? I must be dreaming!!

Anyway, I did some further investigation of the 7800 GT's BIOS. I used NiBiTor from MVKTech to dig into the details of the BIOS. Guess what I found??

NiBiTor on my BIOS

The 'Engineering Release' tickbox is ticked, which proves that my 7800 GT is indeed an engineering sample. I ran the same test on a few other 7800 BIOSes: the reference NVidia card, XFX, Palit and ASUS. None of them indicated an engineering sample BIOS.

Another thing I observed was that the BIOS revision, 05.70.02.13.00, seemed a bit odd, since even the reference NVidia BIOS revision was 05.70.02.13.01, while the other BIOSes reported .02 and even .12 for XFX.

I guess this can be listed as the fourth reason why this card was sold so cheaply for a 7800 GT. Either the retailer or AOpen itself is responsible for this mess...

Wednesday, March 08, 2006

7800 GT overclocking myth exposed!

I was reading a review of the Gainward BLISS 7800 GT 'Goes Like Hell' 512 MB (wow!), with rated clocks of 450/1300 (1.4 ns memory), at PureOverclock when their explanation of the nature of overclocking the 7800 GT struck me. Most people know that not every part of the G70 core is clocked at the same rate at any given time. PureOverclock break it down further by explaining what the root clock, vertex clock, and shader and ROP clocks mean.

I don't fully understand it yet, but here's what they said about it:

"You’ll probably need some more details to understand this fully:

• The vertex clock is always 40 MHz higher than the root clock.
• The shader and ROP clocks increase every time you hit a 27 MHz oscillation of the vertex clock (a vertex clock of 486 MHz is 18x27 and so the shader and ROP clocks will increase).
• When the vertex clock hits a 27 MHz oscillation the shader and ROP clocks will be 27MHz less than the root clock when the oscillation takes place. The shader and ROP clocks will not increase until the next 27 MHz oscillation of the vertex clock occurs. Another way to calculate shader and ROP clocks would be that both of them will be 13.5 MHz higher than the root clock.

For example, if you select a root clock of 445 MHz you will have a vertex clock of 485 MHz and the shader and ROP clocks will be 432 MHz.

Why will the shader and ROP clocks be 432 MHz? Because the last 27 MHz oscillation of the vertex clock took place at 459 MHz (27x17). If we now take what we know, which is that the shader and ROP clocks will always be 13.5 MHz higher than the vertex clock was at its last oscillation we can easily predict when the shader and ROP clocks will make a jump."

Head over to the article for the full story. It's very good for GT overclockers out there like me.
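
Just for fun, here's that rule turned into a few lines of Python. Take it as a rough sketch of my own reading of PureOverclock's description; their wording is a bit contradictory, so I've simply made it match their 445 MHz worked example:

    def g70_clocks(root_mhz):
        # Vertex clock is always 40 MHz above the root clock.
        vertex = root_mhz + 40
        # Shader/ROP clocks sit one 27 MHz step below the last
        # 27 MHz multiple the vertex clock has passed.
        last_step = (vertex // 27) * 27
        shader_rop = last_step - 27
        return vertex, shader_rop

    # PureOverclock's example: root 445 -> vertex 485, shader/ROP 432.
    print(g70_clocks(445))  # (485, 432)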

Anyway, the point is that somehow the Gainward 7800 GT being reviewed does not exhibit this behaviour. It seems that all those clocks run at the same rate as the root clock. Whether this is an advantage or not, I don't know, but this card is certainly full of surprises.

Conroe: The New Gaming CPU?

I know this is a bit off-topic for the genre of this blog, but it seems highly related to gaming. Intel has just announced Conroe, a dual-core 65 nm CPU, which will be available in Q3 2006. Head over to AnandTech's first benchmarks of the CPU against AMD's FX-60. The result? Intel kicked AMD's butt big time, especially in gaming! I'm not talking about a 10-15 fps difference, but 30-40 fps!

However, one thing sounds fishy to me: both the Intel and AMD rigs were PROVIDED by Intel, not AnandTech. Now that's gonna steal the thunder, isn't it? I think AT is more than capable of preparing benchmarking rigs of their own and staying neutral to both parties. I saw many forumers questioning this part of Intel's story.

I still wonder if this can also be considered an AGEIA PhysX killer!

Expect to see some serious price cuts from AMD (we haven't seen those for quite some time here!).

Expect to see premium pricing from Intel again.

I don't think Socket AM2 could close the gap that much.

CPU war is heating up again...

Tuesday, March 07, 2006

7900 GTX, 7900 GT early benchmarks!!

From DailyTech:

"System Configuration:
Athlon 64 FX-55
1GB DDR400 (2x512MB)
nForce4 motherboard

All tests were performed with a 1280x1024 resolution. The cards tested claimed 1.2ns DDR3 for both cards. ForceWare 84.12 was used for testing.

GeForce 7900GTX 512MB
650mhz core clock, 1600MHz memory clock
3dmark 03: 17800
3dmark 05: 7600
3dmark 06: 5200

GeForce 7900GT 256MB
450mhz core clock, 1300MHz memory clock
3dmark 03: 13800
3dmark 05: 7000
3dmark 06: 4550
"

I don't know who conducted these benchmarks, probably DailyTech themselves, but one thing is for sure: I don't see The Inquirer mentioned anywhere. What I do know is that I'm interested in the 7900 GT's scores. OK, let's see: what would it take for me to get 13800 and 7000 in 3DMark03 and '05 at 1280x1024 using my current rig? Obviously, I'd have to overclock my GT to 450 MHz core and push the memory as high as possible (2.0 ns rating, not much can be done!). Or, if that's not enough, just OC the core a bit higher perhaps. Maybe the limiting factor is the CPU (mine is a Venice 3000+ Rev. E6!!).

3DMark06 is different, as CPU performance is also taken into account in calculating the final score. The FX-55 is too hard for me to beat, I guess (although I've read of some OC freaks overclocking their Venice 3000+ to get even with an FX-55, or beat it).

My take is that these two cards might position themselves better than G70, whether in availability, pricing or, of course, benchmarks. However, I have mixed feelings about whether they can regain the performance crown from ATi. With ramped-up clocks and a die-shrunk core, the only thing new here is the overclocking potential.

Monday, March 06, 2006

7800 GT: 6859 in 3DMark05 benchmark!

Once again using my gaming rig, I tested my 7800 GT at its stock clocks of 400/550 in 3DMark05. The default settings were used (1024x768).

7800 GT on 3DMark05

That score gives me around 1000 points more than what I had with the ATi X800 GTO. I also noticed that the multitexturing fillrate result is better than the one in 3DMark03.

The benchmark ran smoother than before: I could enjoy the Return to Proxycon test with better framerates. Canyon Flight was also better, although I don't really like that test (come on, where's the gigantic splash when that creature slams into the water??)

Friday, March 03, 2006

7800 GT: 14384 on 3DMark03!

This is the first benchmark that I ran on my nVidia 7800 GT. As usual, I'm running my AMD 64 3000+ at 2250 MHz with 2x512 MB DDR dual-channel RAM. The card is clocked at its stock 400/550 (1.1 GHz).

The default 3DMark03 settings were used (1024x768 resolution, no AA/AF). Here's the result:

7800 GT on 3DMark03

That score is around 3000 points more than what I got before swapping the X800 GTO for the 7800 GT. Bear in mind that the GTO was overclocked! One thing I noticed is that 3DMark03 still reports a slightly lower multitexturing fillrate than the theoretical one (400 MHz x 20 = 8 GTexels/s). This happened before, during my X800 GTO days. I reckon 3DMark05 and '06 might correct that later.
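
For reference, that theoretical figure is just the core clock multiplied by the number of texture units. A trivial sketch, assuming the 7800 GT's 20 pipes each lay down one texel per clock:

    # Theoretical multitexturing fillrate = core clock x texture units.
    core_mhz = 400        # stock 7800 GT core clock
    texture_units = 20    # one texel per unit per clock assumed
    print(core_mhz * texture_units / 1000, "GTexels/s")  # 8.0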

Wednesday, March 01, 2006

AOpen Aeolus 7800 GT DVD256MB

At last, it is time to equip myself with a new graphics card, the one that I've been looking for: the nVidia 7800 GT. I chose the AOpen Aeolus 7800 GT DVD256MB because I found it to be the cheapest 7800 GT around Low Yat Plaza (RM1148). The Forsa 7800 GT was the next best thing (RM1150), but it wasn't available at the time.

AOpen Aeolus 7800 GT

Back

The feeling was great. The price was right and I think I waited for the right time to buy this card, although waiting for the release of G71, a.k.a. the 7900 series, might have promised even better pricing on the 7800 series.

At home, I anxiously opened the box to reveal its contents, especially the card itself. Here is what AOpen offered me in the package:

The card itself:

AOpen 7800GT DVD256

The card looks similar to the nVidia reference card, without any additional features such as a unique cooling system. In fact, I can't see an AOpen sticker even on the fan itself, contrary to what I saw in reviews of the AOpen Aeolus 7800 GT DVD256MB on the web.

Then, the cables and adapters. Hmmm, the cables....

AOpen 7800 GT accessories

AOpen has supplied two DVI-to-VGA adapters, one 6-pin Molex power cable, and TV-out and component cables. I immediately noticed that there's no VIVO cable here, which leads me to believe that this card is AOpen's TV-out-only version. Further investigation reveals that the VIVO version is labelled AOpen Aeolus 7800 GT DVDC256MB. The strange thing is that when I checked this on the Internet, the box for the VIVO version was exactly the same as mine, still printed 'DVD256MB' and not 'DVDC256MB'. I guess those looking for an AOpen card like mine, or for the VIVO version, had better ask for details.

The rest of the contents are the CDs: the installation CD and the bundled games, Pitfall: The Lost Expedition and Second Sight.

AOpen 7800 GT installation CD

AOpen 7800 GT bundled game:

AOpen Aeolus 7800 GT Game Bundle - Pitfall

After checking all the contents, I began to understand the reasons for such cheap pricing on this card. First of all, this is not the VIVO version. Secondly, there's no VideoStudio 7 software bundled either (since this isn't a VIVO!). And then there's the possibility of another reason...

When I checked the specs for the AOpen Aeolus 7800 GT on the official website and in various reviews, they indicated that it comes factory-overclocked, with the core at 430 MHz and the memory at 550 MHz (1.1 GHz effective); this goes for both the TV-out and VIVO versions! What a good package that is! However, when I checked mine using the Coolbits hack and also RivaTuner, both reported the core and memory frequencies as 400 MHz and 500 MHz (1 GHz) respectively, which is nVidia's reference clock. Now how do we describe a situation like this? Is my card a third version released by AOpen, a clocked-down version to justify a very cheap retail price??

Nevertheless, it doesn't affect me much, as I'm not hunting for VIVO functionality anyway, and it is still currently the cheapest 7800 GT around in Malaysia. Furthermore, I ran the Optimal Clock Testing in the overclocking section of the driver (I use ForceWare 81.89, by the way) and I am happy to report that this card runs at 490/590 (1.18 GHz) optimally! According to various reviews on the Internet, that setting can match or beat the performance of a 7800 GTX at stock clocks (430/600); you can't use fillrate-to-fillrate comparisons anymore! I found this to be true in most of the benchmarks that I've read.

Currently, I'm running the card at the reference clock, and I have been doing a few benchmarks so far. The experience is truly different now, and while some might say I'm making a big deal out of this, IT IS a big deal when you can run your games at full settings with higher framerates than before!

As for the X800 GTO, its duty is over, and it now sits comfortably in its box. Who knows, I might need it again later!

Coming soon: benchmark results!