Blog Archive

Wednesday 27 August 2014

Extreme Overclock (this is crazy!)

Extreme Overclocker, John Lam, pushed his Core i7-4770K right through the 7GHz barrier on his silvered Maximus VII Gene, with a final frequency of 7193.81MHz at a massive 2.048V under liquid nitrogen.
Submitted as part of ROG’s OC Showdown | Z97 competition, John scores himself the cool $1,000 prize, congratulations! We’ll have the full OC Showdown results later this week.
[Images: John Lam's world-record 7GHz run on the Maximus VII Gene]

Why are high-end graphics cards so big?


  • By AL-Osman on August 26, 2014 at 2:16 pm.

Over the weekend, AMD announced its new R9 285 graphics card (look for reviews coming soon). This GPU is essentially a slimmed down R9 280X — it’s analogous to the old Radeon 7950, except that it has less RAM (2GB instead of 3GB) and the features that AMD introduced to the R9 290X family. That means TrueAudio support and the Asynchronous Command Engines that can handle eight commands instead of just two. It also supports AMD’s XDMA engine for better multi-GPU scaling.
Sapphire wasted no time announcing two versions of the card — a small Compact Edition at 17.1cm (6.7 inches) long and a standard model at 26.2cm (10.3 inches) long. Both are double-wide cooler models, but the Compact Edition is still far more svelte. That raises an interesting question: why are graphics cards so big, anyway?
[Image: AMD Radeon R9 285]

What big GPUs you have

The simple answer to this question is that the graphics card is far more than just the GPU — but it turns out that die size, TDP, and total card size tend to all follow each other. Below, we’ve graphed the TDP and die sizes for five different top-end Nvidia graphics cards, starting with the 55nm refresh of the GT200 family (the GTX 285) and continuing through the 28nm GTX 780 Ti.
[Charts: Nvidia die sizes and Nvidia TDPs, GTX 285 through GTX 780 Ti]
Note that TDP and die size tend to follow each other at nearly the same slope. The GTX 680 is clearly the winner here from an efficiency standpoint — it was markedly smaller and drew less power than the GTX 580, but performed significantly better than that card. (I used Nvidia for this comparison because the GTX 680 stands out as an extremely well-positioned GPU that delivered a particularly strong set of efficiency and power consumption improvements at the high end).
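The original charts aren't reproduced here, but you can rebuild the gist of them from public spec sheets. A minimal sketch, assuming the five cards are the GTX 285, 480, 580, 680, and 780 Ti, with approximate die sizes and official TDPs from public sources rather than from the article's own charts:

```python
# Approximate public die sizes and TDPs for Nvidia's top single-GPU cards,
# GTX 285 (55nm GT200b) through GTX 780 Ti (28nm GK110). Values are rough
# figures from public spec sheets, not taken from the article's charts.
import matplotlib.pyplot as plt

cards     = ["GTX 285", "GTX 480", "GTX 580", "GTX 680", "GTX 780 Ti"]
die_mm2   = [470, 529, 520, 294, 561]   # die size in mm^2 (approximate)
tdp_watts = [204, 250, 244, 195, 250]   # official TDP in watts

fig, ax1 = plt.subplots()
ax1.plot(cards, die_mm2, "o-", color="tab:blue")
ax1.set_ylabel("Die size (mm^2)", color="tab:blue")

ax2 = ax1.twinx()                        # second y-axis for TDP
ax2.plot(cards, tdp_watts, "s--", color="tab:red")
ax2.set_ylabel("TDP (W)", color="tab:red")

ax1.set_title("Die size and TDP tend to track each other")
fig.tight_layout()
plt.show()
```

Note how the GTX 680 dips on both curves at once, which is the efficiency point made below.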
[Image: back of a GTX 680 PCB. Original image by IXBT]
Still, look at the back of a GTX 680 — our efficiency winner of the past six years — and you’ll see that the GPU itself takes up only a small part of the PCB. The internal square bounded by the four screw holes is the actual GPU — so what’s taking up all the rest of the board space?
[Image: front of a GTX 680 PCB. Original image by IXBT]
The front provides the answers. The memory chips sit in an array around the outside of the GPU die, surrounded by a further mesh of power circuitry: voltage regulators and third-party controller chips (if any are present). Most of the hardware on the card isn't GPU. You can also see the silkscreen for the fan's location — it's the circular shape.
This is where physics gets in the way of our plans for tiny GTX 680s. The tighter your PCB layout, the smaller the gap between all of the various circuits. Designers can compensate to a degree by using higher-end components that draw less power, building better cooling solutions for the entire GPU, and binning parts to ensure only the best chips get used for the smaller form factors, but at the end of the day there’s a lower limit on just how small things can be. Past that point, you’re packing more and more heat into a smaller and smaller area.
The other downside to packing more components into smaller areas is that the heatsink becomes smaller in turn (there's less area to cover). This typically means the system requires a smaller, higher-RPM fan — and higher-RPM fans tend, inevitably, to be louder fans. Thus, we inevitably end up with situations where making cards smaller also means making them louder, and consumers won't trade size for sound past a certain point.

Vastly increased efficiency without dramatically higher power consumption

One of the improvements that shines through a comparison like this, however, is the degree to which both AMD and Nvidia have improved performance without dramatically increasing power consumption. The official TDP on Nvidia’s top-end single GPU card in 2008 was modestly lower than today (204W vs. 250W), but the GTX 780 Ti would blow the doors off any GTX 285 configuration in existence.
Granted, Intel and AMD have long stuck with 140W TDPs as a sort of unofficial maximum as well, but even Intel hasn't delivered the same kind of increase in real-world applications as Nvidia or AMD has over the same period. The relative rate of increase in HPC applications might be similar given that Intel has added AVX and AVX2 between 2008 and 2014, but outside of scientific computing, those instruction sets don't give the company an overwhelming advantage.
Incidentally, the fact that we require large GPUs and PCBs is part of why APU graphics will always be in a permanent state of catchup. The GTX 780 may be fantastically efficient compared to the top GPUs of 2008, but it’s still drawing 250W with dedicated power circuitry and onboard RAM. There’s just no way to integrate that kind of configuration into a conventional socket — if there was, we’d have never needed discrete cards at all.

Thursday 31 July 2014

Overclocking

What is Overclocking?

Overclocking is the process of making a computer or component operate faster than the clock frequency specified by the manufacturer by modifying system parameters (hence the name "overclocking"). Operating voltages may also be changed (increased), which can increase the speed at which operation remains stable. Most overclocking techniques increase power consumption, generating more heat, which must be dispersed if the chip is to remain functional.

Why do we overclock our GPU and even CPU?

The answer to this common question is simple: we do it to increase the operating speed of the given hardware.

Results

The trade-offs are an increase in power consumption and fan noise, the risk that the system becomes unstable if the equipment is overclocked too much, and the risk of damage due to excessive overvoltage or heat generation. In extreme cases, costly and complex cooling (e.g., water cooling) is required. On a large number of newer Intel CPUs (those without unlocked multipliers), because of the CPU's drastic redesign (that is, the replacement of the FSB with the base clock), overclocking, if even possible, comes with a high risk of system instability. Undervolting is possible to some extent (depending on motherboard design and CPU quality) and may allow a user to turn a standard-voltage CPU into a low-voltage CPU without having to pay more, and without being restricted by a low-voltage CPU's low multiplier.
The speed gained by overclocking depends largely upon the application; benchmarks for different purposes are published.
Many people overclock their hardware to improve its performance. This is practiced more by enthusiasts than professional users seeking an increase in the performance of their computers, as overclocking carries risks of less reliable functioning and damage. There are several purposes for overclocking. Overclocking allows testing over-the-horizon technologies that available component specifications are not capable of, without having to enter the expensive realm of specialized computing. For professional users, overclocking improves professional personal computing capacity, therefore allowing improved productivity. Hobbyists may enjoy building, tuning, and comparison racing their systems with standardized benchmark software. Some hobbyists purchase less expensive computer components and overclock to higher clock rates in an attempt to save money but achieve the same performance. A similar but slightly different approach to cost saving is overclocking outdated components to keep pace with new system requirements, rather than purchasing new hardware. If the overclocking stresses equipment to the point of failure, little is lost as it is fully depreciated, and would have needed to be replaced in any case.
Computer components that may be overclocked include processors (CPU), video cards, motherboard chipsets, and RAM. Most modern CPUs increase their effective operating speeds by multiplying the system clock frequency by a factor (the CPU multiplier). CPUs can be overclocked by manipulating the CPU multiplier, and the CPU and other components can be overclocked by increasing the speed of the system clock (external clock) or other clocks (such as a front-side bus (FSB) clock). As clock speeds are increased, components will ultimately stop operating reliably, or fail permanently, even if voltages are increased to maximum safe levels. The maximum speed is determined by overclocking beyond the point of instability, then accepting a slightly lower setting. Components are guaranteed to operate correctly up to their rated values; beyond that point, different samples may have different overclocking potential.
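The multiplier arithmetic above is simple enough to sanity-check in a few lines. A minimal sketch; the clocks and multipliers below are made-up illustrative values, not taken from any particular CPU:

```python
# Effective CPU clock = external (base) clock x CPU multiplier.
# A quick illustration of the two overclocking levers described above.

def effective_clock_mhz(base_clock_mhz: float, multiplier: float) -> float:
    """Return the core clock in MHz."""
    return base_clock_mhz * multiplier

stock    = effective_clock_mhz(100, 35)  # e.g. 100 MHz x 35 = 3500 MHz
multi_oc = effective_clock_mhz(100, 44)  # raise the multiplier (unlocked CPUs only)
bclk_oc  = effective_clock_mhz(104, 35)  # raise the base clock (affects other buses too)

print(stock, multi_oc, bclk_oc)          # 3500.0 4400.0 3640.0
```

Raising the base clock overclocks everything tied to it, which is why multiplier overclocking on unlocked chips is the safer lever.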
CPU multipliers, bus dividers, voltages, thermal loads, cooling techniques and several other factors such as individual semiconductor clock and thermal tolerances can affect the speed, stability, and safe operation of the computer.

What should be kept in mind while doing the whole process?

I am not going to write a long, boring note that you will skip, because this is the most important part.
Overclocking is not a joke. It can be very useful but also very harmful, so before you start, check the following points.
Cooling: since you are overclocking, the hardware will of course consume extra power and produce a lot of heat, which must be removed.
Check the heatsinks on your CPU and even your GPU; for overclocking you must have a powerful heatsink. Remember that the best affordable heatsinks are made of pure copper, because copper offers excellent conductivity at a reasonable price. Silver is an even better conductor than copper, but we cannot use silver because it is very expensive.

Water cooling carries waste heat to a radiator. Thermoelectric cooling devices which actually refrigerate using the Peltier effect can help with high thermal design power (TDP) processors made by Intel and AMD in the early twenty-first century. Thermoelectric cooling devices create temperature differences between two plates by running an electric current through the plates. This method of cooling is highly effective, but itself generates significant heat elsewhere which must be carried away, often by a convection-based heatsink or a water-cooling system.

Liquid nitrogen may be used for cooling an overclocked system when an extreme measure of cooling is needed.
Other cooling methods are forced convection and phase-transition cooling, which is used in refrigerators and can be adapted for computer use. Liquid nitrogen, liquid helium, and dry ice are used as coolants in extreme cases,[4] such as record-setting attempts or one-off experiments, rather than for cooling an everyday system. In June 2006, IBM and Georgia Institute of Technology jointly announced a new record in silicon-based chip clock rate (the rate a transistor can be switched at, not the CPU clock rate[5]) above 500 GHz, which was done by cooling the chip to 4.5 K (−268.6 °C; −451.6 °F) using liquid helium.[6] The CPU frequency world record is 8.429 GHz as of September 2011.[7] These extreme methods are generally impractical in the long term, as they require refilling reservoirs of vaporizing coolant, and condensation can form on chilled components.[4] Moreover, silicon-based junction gate field-effect transistors (JFETs) will degrade below temperatures of roughly 100 K (−173 °C; −280 °F) and eventually cease to function or "freeze out" at 40 K (−233 °C; −388 °F), since the silicon ceases to be semiconducting,[8] so using extremely cold coolants may cause devices to fail.
Submersion cooling, used by the Cray-2 supercomputer, involves sinking part of the computer system directly into a chilled liquid that is thermally conductive but has low electrical conductivity. The advantage of this technique is that no condensation can form on components.[9] A good submersion liquid is Fluorinert made by 3M, which is expensive. Another option is mineral oil, but impurities such as those in water might cause it to conduct electricity.

Stability and functional correctness


As an overclocked component operates outside of the manufacturer's recommended operating conditions, it may function incorrectly, leading to system instability. Another risk is silent data corruption by undetected errors. Such failures might never be correctly diagnosed and may instead be incorrectly attributed to software bugs in applications, device drivers, or the operating system. Overclocked use may permanently damage components enough to cause them to misbehave (even under normal operating conditions) without becoming totally unusable.
A large-scale 2011 field study of hardware faults causing system crashes in consumer PCs and laptops showed a 4x to 20x increase (depending on CPU manufacturer) in system crashes due to CPU failure for overclocked computers over an 8-month period.[10]
In general, overclockers claim that testing can ensure that an overclocked system is stable and functioning correctly. Although software tools are available for testing hardware stability, it is generally impossible for any private individual to thoroughly test the functionality of a processor.[11] Achieving good fault coverage requires immense engineering effort; even with all of the resources dedicated to validation by manufacturers, faulty components and even design faults are not always detected.
A particular "stress test" can verify only the functionality of the specific instruction sequence used in combination with the data and may not detect faults in those operations. For example, an arithmetic operation may produce the correct result but incorrect flags; if the flags are not checked, the error will go undetected.
To further complicate matters, in process technologies such as silicon on insulator (SOI), devices display hysteresis—a circuit's performance is affected by the events of the past, so without carefully targeted tests it is possible for a particular sequence of state changes to work at overclocked rates in one situation but not another even if the voltage and temperature are the same. Often, an overclocked system which passes stress tests experiences instabilities in other programs.[12]
In overclocking circles, "stress tests" or "torture tests" are used to check for correct operation of a component. These workloads are selected as they put a very high load on the component of interest (e.g. a graphically intensive application for testing video cards, or different math-intensive applications for testing general CPUs). Popular stress tests include Prime95, Everest, Super PI, OCCT, Linpack (via the LinX and IntelBurnTest GUIs), SiSoftware Sandra, BOINC, Intel Thermal Analysis Tool, and Memtest86. The hope is that any functional-correctness issues with the overclocked component will show up during these tests, and if no errors are detected during the test, the component is then deemed "stable". Since fault coverage is important in stability testing, the tests are often run for long periods of time, hours or even days. An overclocked computer is sometimes described using the number of hours and the stability program used, such as "prime 12 hours stable".
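To make the idea concrete, here is a toy version of what a torture test does: run a heavy workload whose correct answer is known in advance, and flag any mismatch. This is only a sketch of the principle; real tools like Prime95 use far heavier FFT-based workloads:

```python
# Toy version of a CPU "torture test": hammer the CPU with a computation
# whose correct answer is known independently, and flag any mismatch.
import time

N = 200_000

def stress_iteration() -> int:
    # Deterministic integer workload: 0^2 + 1^2 + ... + (N-1)^2.
    return sum(i * i for i in range(N))

# Reference value from the closed-form formula, independent of the loop above.
EXPECTED = (N - 1) * N * (2 * N - 1) // 6

def torture_test(minutes: float = 1.0) -> bool:
    deadline = time.time() + minutes * 60
    iterations = 0
    while time.time() < deadline:
        if stress_iteration() != EXPECTED:
            print(f"FAIL after {iterations} good iterations")
            return False
        iterations += 1
    print(f"PASS: {iterations} iterations with no errors")
    return True

if __name__ == "__main__":
    torture_test(minutes=1.0)   # overclockers run these for hours or even days
```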

After the whole process always check your system.

Benchmarks are used to evaluate performance. The benchmarks can themselves become a kind of "sport", in which users compete for the highest scores. As discussed above, stability and functional correctness may be compromised when overclocking, and meaningful benchmark results depend on correct execution of the benchmark. Because of this, benchmark scores may be qualified with stability and correctness notes (e.g. an overclocker may report a score, noting that the benchmark only runs to completion 1 in 5 times, or that signs of incorrect execution such as display corruption are visible while running the benchmark). A widely used test of stability is Prime95, as it has built-in error checking and the computer fails the test if it is unstable.
Given only benchmark scores it may be difficult to judge the difference overclocking makes to the overall performance of a computer. For example, some benchmarks test only one aspect of the system, such as memory bandwidth, without taking into consideration how higher clock rates in this aspect will improve the system performance as a whole. Apart from demanding applications such as video encoding, high-demand databases, and scientific computing, memory bandwidth is typically not a bottleneck, so a great increase in memory bandwidth may be unnoticeable to a user depending on the applications used. Other benchmarks, such as 3DMark, attempt to replicate game conditions.
There are many other applications for checking performance; it depends on your needs. If you are a gamer, use 3DMark and also check the FPS in high-end games like Crysis 3, Battlefield 4, or NFS Rivals. Beware: the temperature must be monitored throughout.

What do we get?

  • The user can, in many cases, purchase a lower performance, cheaper component and overclock it to the clock rate of a more expensive component.
  • Higher performance in games, encoding, video editing applications, and system tasks at no additional expense, but with increased electrical power consumption. Overclocking can extend the useful life of older equipment.
  • Some systems have "bottlenecks", where small overclocking of a component can help realize the full potential of another component to a greater percentage than the limiting hardware is overclocked. For instance, many motherboards with AMD Athlon 64 processors limit the clock rate of four units of RAM to 333 MHz. However, the memory performance is computed by dividing the processor clock rate (which is a base number times a CPU multiplier, for instance 1.8 GHz is most likely 9×200 MHz) by a fixed integer, such that, at a stock clock rate, the RAM would run at a clock rate near 333 MHz. Manipulating elements of how the processor clock rate is set (usually lowering the multiplier), it is often possible to overclock the processor a small amount, around 100–200 MHz (less than 10%), and gain a RAM clock rate of 400 MHz (a 20% increase in RAM speed, though not in overall system performance). A worked version of this arithmetic appears after this list.
  • Some people overclock for its own sake, for pleasure. The PCMark website and others host online communities dedicated to overclocking.
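As promised in the RAM-divider bullet, here is that arithmetic worked through. The integer-divider model below is a simplification of how real Athlon 64 boards pick memory dividers, but it reproduces the numbers from the example:

```python
# Worked version of the Athlon 64 RAM-divider example from the list above.
# The memory clock is the CPU clock divided by a fixed integer, so it rarely
# lands exactly on the configured target speed.
import math

def ram_clock_mhz(base_mhz: float, multiplier: int, ram_target_mhz: float) -> float:
    cpu_mhz = base_mhz * multiplier
    divider = math.ceil(cpu_mhz / ram_target_mhz)  # smallest divider that stays under target
    return cpu_mhz / divider

# Stock: 9 x 200 MHz = 1800 MHz CPU; the divider lands the RAM at 300 MHz, not 333.
print(ram_clock_mhz(200, 9, 333))   # 300.0

# Lower the multiplier and raise the base clock: 8 x 250 MHz = 2000 MHz CPU
# (~11% CPU overclock), and the RAM now lands exactly on 400 MHz (20%+ faster RAM).
print(ram_clock_mhz(250, 8, 400))   # 400.0
```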

What will you lose if you try to really OVERCLOCK it?

  • The lifespan of semiconductor components can be reduced by increased voltages and heat. Warranties may be voided by overclocking.
  • Increased clock rates and voltages increase power consumption, increasing electricity cost and heat production. The excess heat increases the ambient air temperature within the system case, which may affect other components. The hot air blown out of the case will heat the room it is in.
  • An overclocked computer which works correctly may misbehave after future configuration changes. For example, Windows may appear to work with no problems, but when it is re-installed or upgraded, error messages such as a "file copy error" may be received during Windows Setup.[15] Microsoft says this of errors in upgrading to Windows XP: "Your computer [may be] over-clocked." Because installing Windows is very memory-intensive, decoding errors may occur when files are extracted from the Windows XP CD-ROM.
  • High-performance fans running at maximum speed, as used for the required degree of cooling of an overclocked machine, can be noisy, some producing 50 dB or more of noise. When maximum cooling is not required, fan speeds can be reduced below the maximum: fan noise has been found to be roughly proportional to the fifth power of fan speed, so halving speed reduces noise by about 15 dB (see the quick calculation after this list).[16] Fan noise can be reduced by design improvements, e.g. by designing fans with aerodynamically optimized blades for smoother airflow, reducing noise to around 20 dB at approximately 1 metre. Larger fans rotating more slowly, which produce less noise than smaller, faster fans with the same airflow, can be used. Acoustical insulation inside the case, e.g. acoustic foam, can reduce noise. Additional cooling methods which do not use noisy fans can be used, such as liquid and phase-change cooling.
  • Some motherboards are designed to use the secondary airflow from a standard CPU fan to cool other heatsinks, such as the northbridge. If the CPU heatsink or fan is changed on such boards, other heatsinks may not be cooled sufficiently.
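The fifth-power fan-noise rule quoted above is easy to verify. A quick calculation, using just the relationship stated in that bullet:

```python
# Quick check of the fan-noise rule above: noise scales roughly with the
# fifth power of fan speed, so halving RPM cuts noise by about 15 dB.
import math

def noise_change_db(speed_ratio: float) -> float:
    # dB change for a noise source proportional to speed^5
    return 10 * math.log10(speed_ratio ** 5)

print(noise_change_db(0.5))   # about -15.05 dB at half speed
print(noise_change_db(2.0))   # about +15.05 dB at double speed
```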

Risks of overclocking

  • Increasing the operation frequency of a component will usually increase its thermal output in a linear fashion, while an increase in voltage usually causes heat to increase quadratically (see the sketch after this list). Excessive voltages or improper cooling may cause chip temperatures to rise almost instantaneously, causing the chip to be damaged or destroyed.
  • Exotic cooling methods used to facilitate overclocking such as water cooling are more likely to cause damage if they malfunction. Sub-ambient cooling methods such as phase-change cooling or liquid nitrogen will cause water condensation, which will cause damage unless controlled.
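The "voltage hurts quadratically" point follows from the textbook dynamic-power relation P ≈ C·V²·f, which is standard CMOS theory rather than something stated in this post. A quick illustration:

```python
# Textbook dynamic-power relation behind the bullet above: P ~ C * V^2 * f.
# Frequency raises power linearly; voltage raises it quadratically.

def relative_power(freq_ratio: float, volt_ratio: float) -> float:
    return freq_ratio * volt_ratio ** 2

print(relative_power(1.20, 1.00))   # +20% clock at stock voltage -> 1.20x power
print(relative_power(1.20, 1.10))   # +20% clock and +10% voltage -> ~1.45x power
```

This is why a modest voltage bump can add far more heat than the clock increase it enables.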

Limitations

Overclocking components can only be of noticeable benefit if the component is on the critical path for a process, that is, if it is a bottleneck. If disk access or the speed of an Internet connection limits the speed of a process, a 20% increase in processor speed is unlikely to be noticed. Overclocking a CPU will not benefit a game limited by the speed of the graphics card.
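You can put rough numbers on this bottleneck argument with Amdahl's-law-style arithmetic. This is an illustration I'm adding, not something from the original text, and the fractions below are made up:

```python
# Rough Amdahl's-law illustration of the bottleneck argument above: if only
# part of a task's time is CPU-bound, a 20% CPU overclock barely shows.

def overall_speedup(cpu_fraction: float, cpu_speedup: float) -> float:
    return 1.0 / ((1.0 - cpu_fraction) + cpu_fraction / cpu_speedup)

print(overall_speedup(0.5, 1.2))   # ~1.09x: half CPU-bound, 20% OC gives ~9% overall
print(overall_speedup(0.1, 1.2))   # ~1.02x: disk/network-bound, barely noticeable
```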
While overclocking which causes no instability is not a problem, occasional undetected errors are a serious risk for applications which must be error-free, for example scientific or financial applications.

Graphics cards (Nvidia and ATI)

Graphics cards can be overclocked. There are utilities to achieve this, such as EVGA's Precision, RivaTuner, ATI Overdrive (on ATI cards only), MSI Afterburner, Zotac Firestorm (on Zotac cards), and the PEG Link Mode on Asus motherboards. Overclocking a GPU will often yield a marked increase in performance in synthetic benchmarks, usually reflected in game performance. It is sometimes possible to see that a graphics card is being pushed beyond its limits before any permanent damage is done by observing on-screen artifacts. Two such "warning bells" are widely understood: green, flashing, random triangles appearing on the screen usually correspond to overheating problems on the GPU itself, while white, flashing dots appearing randomly (usually in groups) on the screen often mean that the card's RAM is overheating. It is common to run into one of these problems when overclocking graphics cards; both symptoms at the same time usually mean that the card is severely pushed beyond its heat, clock rate, or voltage limits. (If seen when not overclocked, they indicate a faulty card.) If the clock speed is excessive but without overheating, the artifacts are different. There is no general rule, but usually if the core is pushed too hard, black circles or blobs appear on the screen, and overclocking the video memory beyond its limits usually results in the application or the entire operating system crashing. After a reboot, video settings are reset to the standard values stored in the video card firmware, and the maximum clock rate of that specific card is now known.
Some overclockers apply a potentiometer to the video card to manually adjust the voltage (which invalidates the warranty). This results in much greater flexibility, as overclocking software for graphics cards is rarely able to adjust the voltage. Excessive voltage increases may destroy the video card.

Personal Experience (please read)


I have seen a lot of guys overclock and wreck their hardware. Overclocking does not mean crossing the limits of the hardware.

  • Whenever you overclock, do it in steps: if the core speed is 700MHz, take it to 800 or 900MHz; never jump straight to 1GHz or 1.5GHz (see the sketch after this list). Gamers often do exactly that and destroy the whole GPU, which ends up costing them double.
  • Not all components are made for overclocking; take care and check your RAM, GPU, motherboard, and CPU first (Intel's Z-series chipsets are designed specifically for overclocking).
  • Always prefer overclocking from the BIOS.

  • Never touch the voltage control unless you know exactly what you are doing; it is not as easy as it seems, and a mistake can ruin the whole product.
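Here is the step-by-step approach from the first bullet, written out as Python. Both set_core_clock() and passes_stress_test() are hypothetical placeholders; in reality you would move the clock slider in a tool like MSI Afterburner and run a stress test or game at each step:

```python
# Sketch of the step-by-step approach from the first bullet above.
# set_core_clock() and passes_stress_test() are hypothetical placeholders,
# standing in for a real tool (e.g. MSI Afterburner) plus a real stress test.

def set_core_clock(mhz: int) -> None:
    print(f"(pretend) setting core clock to {mhz} MHz")

def passes_stress_test(mhz: int) -> bool:
    # Placeholder: stand-in for running a benchmark loop while watching
    # for artifacts, crashes, or overheating.
    return mhz <= 860   # pretend this particular card tops out near 860 MHz

def find_max_stable(stock_mhz: int, step_mhz: int = 25, limit_mhz: int = 900) -> int:
    clock = stock_mhz
    while clock + step_mhz <= limit_mhz:
        candidate = clock + step_mhz
        set_core_clock(candidate)
        if not passes_stress_test(candidate):
            break                     # back off to the last stable setting
        clock = candidate
    set_core_clock(clock)
    return clock

print("Max stable:", find_max_stable(700))   # steps 725, 750, ... never a blind jump
```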

From admin Usman.

Please leave a comment, and take care while doing this. The fact is that all of these things are safe up to a limit; don't ever cross it.

Sunday 26 May 2013

Black screen error

Black screen error while starting game.

Main causes

This problem is mainly caused by the graphics portion of the computer. Many people have reported that, during play, the screen freezes and then goes black, or Windows says the graphics driver has stopped responding. Some people report another similar problem: the game does not even start.

If it freezes during the game.

This may be caused by a graphics card driver that is old or not compatible with the version of Windows you are using. For such problems, go to Microsoft or contact the manufacturer for the latest driver. Nowadays, driver-updater tools are very popular, so use one of them to find the correct driver for your PC.

Check this out, it's free and powerful: SlimDrivers.

                                              

     Download

If the game does not start.

This is a common type of problem, usually caused by the graphics card itself or by the application you are using. It is especially common with built-in graphics. To resolve it, reinstall the application or buy a new external graphics card, but don't forget to check your slot type first.

                                                                       

Friday 24 May 2013

OS for Gaming

I am getting a lot of questions from another site about which is the best operating system for games.

Now I will help you out.

First, Windows XP

If your game is compatible with Windows XP and you have a low-spec system, choose Windows XP, because it is very light. Games that run on Windows 7 usually need about half as much RAM on Windows XP, and DirectX 9 is still good.

High requirements and professional gamers.

For serious gamers, Windows 7 Ultimate SP1 is still the best operating system for games. Windows 7's DirectX 11 is compatible with most games, and DirectX 11 delivers the best eye-popping graphics with smooth gameplay.

Win8

Not good for games, at least not yet.

Now that Windows 8 has been released, it is nothing more than an updated version of Windows 7 with the Metro start screen.
For games it is trash, with no real improvement. This version of Windows seems unable to control itself: it doesn't run smoothly and brings no graphics improvement.

As I mentioned earlier, it's trash.

BY DARK GAMER

Admin: Usman Siddiq

Sunday 26 August 2012

SWIFTSHADER

SwiftShader: play the latest games without a graphics card.
How to use:
Copy the D3D DLL from the SwiftShader folder and place it in the game's directory (see the sketch below).
Choose your operating system.
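For clarity, here is the copy step as a tiny script. The paths and the d3d9.dll filename are assumptions for illustration; use whatever DLL your SwiftShader download actually contains:

```python
# Tiny sketch of the "copy the DLL into the game directory" step above.
# The paths and DLL name below are hypothetical examples; adjust for your system.
import shutil
from pathlib import Path

swiftshader_dll = Path(r"C:\SwiftShader\d3d9.dll")   # assumed DLL name/location
game_dir = Path(r"C:\Games\MyGame")                  # hypothetical game folder

shutil.copy2(swiftshader_dll, game_dir / swiftshader_dll.name)
print("Copied; the game should now load SwiftShader's software renderer.")
```

Games load a d3d9.dll sitting next to the executable in preference to the system one, which is why a simple copy is enough.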
DOWNLOAD NOW

Saturday 25 August 2012

DirectX 10

Use the latest DirectX for better gaming performance.
Use DirectX 10 for..
Download it here.