
Sunday, 7 August 2011

Gliese 581d: A Habitable Exoplanet?
Source: CNRS press release



Alien Life
Posted: 05/20/11
Summary: A new computer model that simulates possible exoplanet climates indicates that the planet Gliese 581d might be warm enough to have oceans, clouds and rainfall. Gliese 581d is likely to be a rocky planet with a mass at least seven times that of Earth.


Schematic of the global climate model used to study Gliese 581d. Red/blue shading indicates hot/cold surface temperatures, while the arrows show wind velocities at 2 km height in the atmosphere. © LMD/CNRS

Are there other planets that are inhabited like the Earth, or at least habitable? The discovery of the first habitable planet has become a quest for many astrophysicists, who look for rocky planets in the "habitable zone" around stars, the range of distances in which planets are neither too cold nor too hot for life to flourish.

In this quest, the red dwarf star Gliese 581 has already received a huge amount of attention. In 2007, scientists reported the detection of two planets orbiting not far from the inner and outer edge of its habitable zone (Gliese 581d and Gliese 581c). While the more distant planet, Gliese 581d, was initially judged to be too cold for life, the closer-in planet, Gliese 581c, was thought to be potentially habitable by its discoverers. However, later analysis by atmospheric experts showed that if it had liquid oceans like Earth, they would rapidly evaporate in a 'runaway greenhouse' effect similar to that which gave Venus the hot, inhospitable climate it has today.

A new possibility emerged late in 2010, when a team of observers led by Steven Vogt at the University of California, Santa Cruz, announced that they had discovered a new planet, which they dubbed Gliese 581g, or 'Zarmina's World'. This planet, they claimed, had a mass similar to that of Earth and was close to the centre of the habitable zone. For several months, the discovery of the first potential Earth twin outside the Solar System seemed to have been achieved. Unfortunately, later analysis by independent teams has raised serious doubts about this extremely difficult detection. Many now believe that Gliese 581g may not exist at all. Instead, it may simply be a result of noise in the ultra-fine measurements of stellar 'wobble' needed to detect exoplanets in this system.


Surface temperature maps for simulations of Gliese 581d assuming an atmosphere of 20 bars of CO2 and varying rotation rates. It is currently unknown whether the planet rotates slowly or has permanent day and night sides. In all cases, the temperatures allow for the presence of liquid water on the surface. © LMD/CNRS

It is Gliese 581g's big brother – the larger and more distant Gliese 581d – which has now been shown to be the first confirmed potentially habitable exoplanet, by Robin Wordsworth, François Forget and co-workers from the Laboratoire de Météorologie Dynamique (CNRS/UPMC/ENS/Ecole Polytechnique) at the Institut Pierre Simon Laplace in Paris, in collaboration with a researcher from the Laboratoire d'astrophysique de Bordeaux (CNRS/Université Bordeaux 1). Although it is likely to be a rocky planet, it has a mass at least seven times that of Earth, and is estimated to be about twice its size.

At first glance, Gliese 581d is a pretty poor candidate in the hunt for life: it receives less than a third of the stellar energy Earth does and may be tidally locked, with a permanent day and night side. After its discovery, it was generally believed that any atmosphere thick enough to keep the planet warm would become cold enough on the night side to freeze out entirely, ruining any prospects for a habitable climate.

To test whether this intuition was correct, Wordsworth and colleagues developed a new kind of computer model capable of accurately simulating possible exoplanet climates. The model simulates a planet's atmosphere and surface in three dimensions, rather like those used to study climate change on Earth. However, it is based on more fundamental physical principles, allowing the simulation of a much wider range of conditions than would otherwise be possible, including any atmospheric cocktail of gases, clouds and aerosols.

To their surprise, they found that with a dense carbon dioxide atmosphere - a likely scenario on such a large planet - the climate of Gliese 581d is not only stable against collapse, but warm enough to have oceans, clouds and rainfall. One of the key factors in their results was Rayleigh scattering, the phenomenon that makes the sky blue on Earth.

In the Solar System, Rayleigh scattering limits the amount of sunlight a thick atmosphere can absorb, because a large portion of the scattered blue light is immediately reflected back to space. However, as the starlight from Gliese 581 is red, it is almost unaffected. This means that it can penetrate much deeper into the atmosphere, where it heats the planet effectively due to the greenhouse effect of the CO2 atmosphere, combined with that of the carbon dioxide ice clouds predicted to form at high altitudes. Furthermore, the 3D circulation simulations showed that the daylight heating was efficiently redistributed across the planet by the atmosphere, preventing atmospheric collapse on the night side or at the poles.
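The wavelength dependence at work here can be checked with a rough calculation: Rayleigh scattering efficiency falls off as the fourth power of wavelength, so the red light of an M dwarf is scattered far less than the blue part of sunlight. The short sketch below uses representative wavelengths chosen purely for illustration; they are not values taken from the study.

    # Rayleigh scattering strength scales as 1/wavelength^4, which is why the
    # red light of Gliese 581 penetrates a thick atmosphere more easily than
    # blue sunlight does. The wavelengths below are illustrative assumptions.
    def relative_rayleigh(wavelength_nm, reference_nm=450.0):
        """Scattering efficiency relative to blue light at 450 nm."""
        return (reference_nm / wavelength_nm) ** 4

    print(f"green sunlight (550 nm):  {relative_rayleigh(550):.2f} x the blue value")
    print(f"red dwarf light (850 nm): {relative_rayleigh(850):.2f} x the blue value")

By this measure, light near a red dwarf's spectral peak is scattered more than ten times less efficiently than blue light, which is why so much more of it reaches the surface.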


This artist's concept illustrates a young, red dwarf star surrounded by three planets. Such stars are dimmer and smaller than yellow stars like our sun. Credit: NASA/JPL-Caltech

Scientists are particularly excited by the fact that at 20 light years from Earth, Gliese 581d is one of our closest galactic neighbours. For now, this is of limited use for budding interstellar colonists – the furthest-travelled man-made spacecraft, Voyager 1, would still take over 300,000 years to arrive there. However, it does mean that in the future telescopes will be able to detect the planet's atmosphere directly.

While Gliese 581d may be habitable, there are other possibilities: it could have kept some atmospheric hydrogen, like Uranus and Neptune, or the fierce wind from its star during its infancy could even have torn its atmosphere away entirely. To distinguish between these scenarios, Wordsworth and co-workers came up with several simple tests that observers will be able to perform in the future with a sufficiently powerful telescope.

If Gliese 581d does turn out to be habitable, it would still be a pretty strange place to visit – the denser air and thick clouds would keep the surface in a perpetual murky red twilight, and its large mass means that surface gravity would be around double that on Earth. But the diversity of planetary climates in the galaxy is likely to be far wider than the few examples we are used to from the Solar System. In the long run, the most important implication of these results may be the idea that life-supporting planets do not in fact need to be particularly like the Earth at all.

Local Scientists Produce First Aerogel in Space

First Space-Produced Aerogel Made on Space Sciences Laboratory Rocket Flight
June 19, 1996: Aerogel is the lightest solid known to mankind, with only three times the density of air. A block the size of a human weighs less than a pound. Because of its amazing insulating properties, an inch-thick slab can safely shield the human hand from the heat of a blowtorch. A sugar-cube-sized portion of the material has the internal surface area of a basketball court. Aerogel, the only known transparent insulator, is a supercritically dried gel sometimes referred to as "frozen smoke".

On April 3, 1996, the first space-produced samples of aerogel were made by NASA on a flight of a Starfire rocket. The production of such materials in space is interesting because of the strong influence of gravity on how a gel forms. Comparisons of gels manufactured in space and on the ground have shown large differences, and production in space can yield a higher-quality product with a more uniform structure.

Chemical Engineering Progress (June 1995, p. 14) noted that "the holy grail of aerogel applications has been developing invisible insulation for use between window panes." The production of insulating, transparent windows through aerogel manufacturing in space could develop into a substantial market for residential and commercial applications. The excellent thermal properties and transparent nature of silica aerogel make it an obvious choice for super-insulating windows, skylights, solar collector covers, and specialty windows.

Space Sciences Laboratory Hosts Bill Nye, the Science Guy

October 16, 1996

This week, the Marshall Space Flight Center and the Space Sciences Laboratory are hosting Bill Nye, The Science Guy, as his crew from Seattle films for an upcoming episode of the PBS television series. Taping in SSL will occur on Wednesday, October 16 and Thursday, October 17.
Areas of science from the laboratory to be featured include Aerogel, "cool telescopes" such as BATSE and the AXAF Calibration Facility, the SSL Solar Vector Magnetograph, and the 105-meter drop tube for microgravity experimentation.
The program will also feature a dive in the Marshall Neutral Buoyancy Simulator, the large tank in which the Hubble Space Telescope repair missions are rehearsed by astronauts, as well as a visit to the Space Station Assembly facility.

First Space-Produced Aerogel Made on Space Sciences Laboratory Rocket Flight

October 8, 1996: Results are now beginning to become available from the April 3, 1996 rocket flight that produced the first space-made Aerogel. As described in the June 19, 1996 Aerogel headline, Aerogel is the lightest solid known to mankind, with only three times the density of air. Because of its appearance, Aerogel is sometimes referred to as "frozen smoke". Aerogel produced on the ground typically displays a blue haze or a slight cloudiness in its appearance. This feature is believed to be caused by impurities and variations in the size of the small pores in the Aerogel material. Scientists are trying to eliminate this haze so that the insulator might be used in window panes and other applications where transparency is important.

The Aerogel made aboard the April flight of the Starfire rocket indicates that gravity effects in samples of the material made on the ground may be responsible for the adverse pore sizes and thus account for the lack of transparency. Both the diameter and volume of the pores in the space-made Aerogel appear to be between 4 and 5 times better than in otherwise identically formulated ground samples. Because Aerogels are the only known transparent insulator, with typical heat conduction properties five times better than the next best alternative, a number of novel applications are foreseen for high-performance Aerogels.

Fall Science Meeting Highlights Tethered Satellite Results

October 15, 1996

Scientists attending the Fall 1996 meeting of the American Geophysical Union will be treated to three special sessions covering scientific results obtained from the reflight of the Tethered Satellite System (TSS-1R). The conference will take place on December 18 and 19 in San Francisco, California.
The TSS-1R science mission was conducted on space shuttle flight STS-75 at the end of February 1996. During the flight, the Tethered Satellite was deployed to a distance of 12.3 miles (19.7 km) and science data was collected aboard the satellite, the space-shuttle orbiter, and from a network of ground stations monitoring the earth's ionosphere.
Five hours of tethered operation yielded a rich scientific data set. These data include tether current and voltage measurements, plasma particle and wave measurements, and visual observations for a variety of pre-planned science objectives. During the flight the conducting tether connecting the Orbiter to the satellite was severed, and large currents were observed to be flowing between the satellite and the Orbiter during the break event.
Further scientific data were obtained from the instruments on the satellite after the break, when the science and NASA support teams were able to capture telemetry from the satellite during the overflight of NASA tracking stations.
One important finding from TSS-1R has been the high level of current collected by the satellite at relatively low voltage throughout the deployed phase of the mission. Surprisingly large currents were also observed during the tether break and gas releases, indicating important new physics at play. The three Tethered Satellite sessions at the AGU meeting will cover the results of data analysis from the mission, important supporting physics insights from laboratory experiments, theoretical and numerical modeling of current collection during the mission, and the conclusions of recent studies on the future use of tethers for science in space.

Unique telescope to open the X(-ray) Files

Artist's concept of AXAF in orbit. The nested mirrors are at center behind the dotted circles.
The finest set of mirrors ever built for X-ray astronomy has arrived at NASA's Marshall Space Flight Center for several weeks of calibration before being assembled into a telescope for launch in late 1998.

The High-Resolution Mirror Assembly (HRMA), as it is known, will be the heart of the Advanced X-ray Astrophysics Facility (AXAF), which is managed by Marshall Space Flight Center. HRMA was built by Eastman Kodak and Hughes Danbury Optical Systems. In 1997-98, the mirror assembly will be integrated by TRW Defense and Space Systems into the AXAF spacecraft. AXAF is designed to give astronomers as clear a view of the universe in X-rays as they now have in visible light through the Hubble Space Telescope.

Indeed, one of the Hubble's recent discoveries may move near the top of the list of things to do for AXAF. Hubble recently discovered that some quasars reside within quite ordinary galaxies. Quasars (quasi-stellar objects) are unusually energetic objects which emit up to 1,000 times as much energy as an entire galaxy, but from a volume about the size of our solar system.

More clues to what is happening inside quasars may lie in the X-rays emitted by the most violent forces in the universe.

Before AXAF can embark on that mission, though, its mirrors must be measured with great precision so astronomers will know the exact shape and quality of the mirrors. Then, once the telescope is in space, they will be able to tell when they discover unusual objects, and be able to measure exactly how unusual.

These measurements will be done in Marshall's X-ray Calibration Facility, the world's largest, over the next few weeks.

AXAF will use four sets of mirrors, each set nested inside the other, to focus X-rays by grazing incidence reflection, the same principle that makes sunlight glare off clear windshields. AXAF's smallest mirror - 63 cm (24.8 in.) in diameter - is larger than the biggest mirror - 58 cm (22.8 in.) - flown on the Einstein observatory (HEAO-2) in 1978-81.

Mapping the details of the mirror will start with an X-ray source pretty much like what a dentist uses to check your teeth. But that's next week's story.

MSFC Earth-Sun Studies Featured at AGU

AGU
December 13, 1996
Fountains of electrified gases spewing from the Earth into space and pictures of the aurora during the day will be highlighted by the American Geophysical Union (AGU) annual winter conference in San Francisco Dec. 15-19.
AGU is one of the largest scientific bodies in the world and takes in everything from earthquakes to solar flares - including work by scientists at Marshall Space Flight Center's Space Sciences Laboratory (SSL) to understand what drives the aurora borealis and causes space storms that can black out cities.
At three sessions during the AGU meeting, Marshall scientists will present their results in several papers, written with colleagues from other institutions, from the Thermal Ion Dynamics Experiment (TIDE) and the Ultraviolet Imager (UVI), two of several instruments aboard the Polar spacecraft launched in 1996.
TIDE recently confirmed that plasmas in the tail of the magnetosphere come from Earth's outer atmosphere being warmed by a flow of materials from space. The magnetosphere is formed by the Earth's magnetic field and buffers the planet from the constant wind of gases streaming from the sun.
Press briefings scheduled for the AGU Fall Meeting include:
Imaging Space Plasmas - Polar UVI and the Inner Magnetosphere Imager on which MSFC will have an important camera. Tuesday, Dec. 17, 12:45 p.m.
Sun-Earth Connections - the new era of coordinated solar-terrestrial research by scientists using Polar and other craft. Time TBD.
"There's a raging controversy over whether the magnetosphere stores energy to any degree, or just dissipates what the solar wind throws at it," said Dr. Tom Moore, director of the space plasma physics branch at SSL and principal investigator for TIDE.
Pictures from the UVI will help scientists decide whether the magnetosphere is driven directly by the solar wind, or it stores then discharges energy like a thunder cloud building a lightning charge.
"Northern winter traditionally has been the busy season for plasma scientists," said Dr. James Spann, a UVI co-investigator at SSL, "because that's when the aurora borealis is almost all in the night sky and can be viewed in visible as well as ultraviolet light."
UVI, included in three sessions at AGU, extends the busy season by letting scientists see what happens during the day. Doing this has been a challenge because the atmosphere's ozone layer reflects solar ultraviolet light that blinds most sensors. Previous instruments let scientists see parts of the daytime aurora, or the entire nightside aurora. UVI aboard Polar is the first to show both the dayside and nightside auroras in full. It does this with narrow bandpass filters - filters that admit narrowly defined colors - that match the light emitted by the auroras.
UVI lets scientists measure, with precision, the energies flowing into the auroral oval. In addition to striking pictures, UVI reveals the footprint of the Earth's magnetic field lines, which may stretch into deep space to several times the distance from the Earth to the Moon.

Free-Floating Planets May Be More Common Than Stars

May 18, 2011: Astronomers have discovered a new class of Jupiter-sized planets floating alone in the dark of space, away from the light of a star. The team believes these lone worlds are probably outcasts from developing planetary systems and, moreover, they could be twice as numerous as the stars themselves.
"Although free-floating planets have been predicted, they finally have been detected," said Mario Perez, exoplanet program scientist at NASA Headquarters in Washington. "[This has] major implications for models of planetary formation and evolution."
The discovery is based on a joint Japan-New Zealand survey that scanned the center of the Milky Way galaxy during 2006 and 2007, revealing evidence for up to 10 free-floating planets roughly the mass of Jupiter. The isolated orbs, also known as orphan planets, are difficult to spot and had gone undetected until now. The planets are located at average distances of approximately 10,000 to 20,000 light-years from Earth.

This artist's concept illustrates a Jupiter-like planet alone in the dark of space, floating freely without a parent star.
This could be just the tip of the iceberg. The team estimates there are about twice as many free-floating Jupiter-mass planets as stars. In addition, these worlds are thought to be at least as common as planets that orbit stars. This adds up to hundreds of billions of lone planets in our Milky Way galaxy alone.
"Our survey is like a population census," said David Bennett, a NASA and National Science Foundation-funded co-author of the study from the University of Notre Dame in South Bend, Ind. "We sampled a portion of the galaxy, and based on these data, can estimate overall numbers in the galaxy."
The study, led by Takahiro Sumi from Osaka University in Japan, appears in the May 19 issue of the journal Nature. The survey is not sensitive to planets smaller than Jupiter and Saturn, but theories suggest lower-mass planets like Earth should be ejected from their stars more often. As a result, they are thought to be more common than free-floating Jupiters.
Previous observations spotted a handful of free-floating planet-like objects within star-forming clusters, with masses three times that of Jupiter. But scientists suspect the gaseous bodies form more like stars than planets. These small, dim orbs, called brown dwarfs, grow from collapsing balls of gas and dust, but lack the mass to ignite their nuclear fuel and shine with starlight. It is thought the smallest brown dwarfs are approximately the size of large planets.

A video from JPL describes the microlensing technique astronomers used to detect the orphan planets.
On the other hand, it is likely that some planets are ejected from their early, turbulent solar systems due to close gravitational encounters with other planets or stars. Without a star to circle, these planets would move through the galaxy as our sun and other stars do, in stable orbits around the galaxy's center. The discovery of 10 free-floating Jupiters supports the ejection scenario, though it's possible both mechanisms are at play.
"If free-floating planets formed like stars, then we would have expected to see only one or two of them in our survey instead of 10," Bennett said. "Our results suggest that planetary systems often become unstable, with planets being kicked out from their places of birth."
The observations cannot rule out the possibility that some of these planets may be in orbit around distant stars, but other research indicates Jupiter-mass planets in such distant orbits are rare.
The survey, the Microlensing Observations in Astrophysics (MOA), is named in part after a giant, wingless, extinct family of birds from New Zealand called the moa. A 5.9-foot (1.8-meter) telescope at Mount John University Observatory in New Zealand is used to regularly scan the copious stars at the center of our galaxy for gravitational microlensing events. These occur when something, such as a star or planet, passes in front of another, more distant star. The passing body's gravity warps the light of the background star, causing it to magnify and brighten. Heftier passing bodies, like massive stars, warp the light of the background star to a greater extent, resulting in brightening events that can last weeks. Small planet-size bodies cause less of a distortion, and brighten a star for only a few days or less.
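The weeks-versus-days contrast follows from the Einstein radius of the lens, which scales as the square root of the lens mass. The sketch below is only an illustration: the lens and source distances and the transverse velocity are typical galactic-bulge values assumed for this estimate, not numbers from the paper.

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8          # speed of light, m/s
    KPC = 3.086e19       # metres per kiloparsec
    M_SUN = 1.989e30     # solar mass, kg
    M_JUP = 1.898e27     # Jupiter mass, kg

    def einstein_crossing_time_days(lens_mass_kg, d_lens_kpc=4.0,
                                    d_source_kpc=8.0, v_transverse=200e3):
        """Time for the source to cross the Einstein radius of a point lens."""
        d_l, d_s = d_lens_kpc * KPC, d_source_kpc * KPC
        r_e = math.sqrt(4 * G * lens_mass_kg / C**2 * d_l * (d_s - d_l) / d_s)
        return r_e / v_transverse / 86400.0

    print(f"solar-mass lens:   ~{einstein_crossing_time_days(M_SUN):.0f} days")
    print(f"Jupiter-mass lens: ~{einstein_crossing_time_days(M_JUP):.1f} days")

With these assumptions a stellar lens gives events lasting about a month, while a Jupiter-mass lens gives events of roughly a day, matching the behaviour described above.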
A second microlensing survey group, the Optical Gravitational Lensing Experiment (OGLE), contributed to this discovery using a 4.2-foot (1.3 meter) telescope in Chile. The OGLE group also observed many of the same events, and their observations independently confirmed the analysis of the MOA group.

Super Storm on Saturn

May 19, 2011: NASA's Cassini spacecraft and a European Southern Observatory ground-based telescope are tracking the growth of a giant early-spring storm in Saturn's northern hemisphere so powerful that it stretches around the entire planet. The rare storm has been wreaking havoc for months and shooting plumes of gas high into the planet's atmosphere.

This false-color infrared image shows clouds of large ammonia ice particles dredged up by the powerful storm. Credit: Cassini.
"Nothing on Earth comes close to this powerful storm," says Leigh Fletcher, a Cassini team scientist at the University of Oxford in the United Kingdom, and lead author of a study that appeared in this week's edition of Science Magazine. "A storm like this is rare. This is only the sixth one to be recorded since 1876, and the last was way back in 1990."
Cassini's radio and plasma wave science instrument first detected the large disturbance in December 2010, and amateur astronomers have been watching it ever since through backyard telescopes. As it rapidly expanded, the storm's core developed into a giant, powerful thunderstorm, producing a 3,000-mile-wide (5,000-kilometer-wide) dark vortex possibly similar to Jupiter's Great Red Spot.
This is the first major storm on Saturn observed by an orbiting spacecraft and studied at thermal infrared wavelengths. Infrared observations are key because heat tells researchers a great deal about conditions inside the storm, including temperatures, winds, and atmospheric composition. Temperature data were provided by the Very Large Telescope (VLT) on Cerro Paranal in Chile and Cassini's composite infrared spectrometer (CIRS), operated by NASA's Goddard Space Flight Center in Greenbelt, Md.
"Our new observations show that the storm had a major effect on the atmosphere, transporting energy and material over great distances -- creating meandering jet streams and forming giant vortices -- and disrupting Saturn's seasonal [weather patterns]," said Glenn Orton, a paper co-author, based at NASA's Jet Propulsion Laboratory in Pasadena, Calif.
The violence of the storm -- the strongest disturbances ever detected in Saturn's stratosphere -- took researchers by surprise. What started as an ordinary disturbance deep in Saturn's atmosphere punched through the planet's serene cloud cover to roil the high layer known as the stratosphere.

Thermal infrared images of Saturn from the Very Large Telescope Imager and Spectrometer for the mid-Infrared (VISIR) instrument on the European Southern Observatory's Very Large Telescope, on Cerro Paranal, Chile, appear at center and on the right. An amateur visible-light image from Trevor Barry, of Broken Hill, Australia, appears on the left. The images were obtained on Jan. 19, 2011.
"On Earth, the lower stratosphere is where commercial airplanes generally fly to avoid storms which can cause turbulence," says Brigette Hesman, a scientist at the University of Maryland in College Park who works on the CIRS team at Goddard and is the second author on the paper. "If you were flying in an airplane on Saturn, this storm would reach so high up, it would probably be impossible to avoid it."
A separate analysis using Cassini's visual and infrared mapping spectrometer, led by Kevin Baines of JPL, confirmed the storm is very violent, dredging up deep material in volumes several times larger than previous storms. Other Cassini scientists are studying the evolving storm and, they say, a more extensive picture will emerge soon.

Solar Storm Warning

March 10, 2006: It's official: Solar minimum has arrived. Sunspots have all but vanished. Solar flares are nonexistent. The sun is utterly quiet.
Like the quiet before a storm.
This week researchers announced that a storm is coming--the most intense solar maximum in fifty years. The prediction comes from a team led by Mausumi Dikpati of the National Center for Atmospheric Research (NCAR). "The next sunspot cycle will be 30% to 50% stronger than the previous one," she says. If correct, the years ahead could produce a burst of solar activity second only to the historic Solar Max of 1958.
That was a solar maximum. The Space Age was just beginning: Sputnik was launched in Oct. 1957 and Explorer 1 (the first US satellite) in Jan. 1958. In 1958 you couldn't tell that a solar storm was underway by looking at the bars on your cell phone; cell phones didn't exist. Even so, people knew something big was happening when Northern Lights were sighted three times in Mexico. A similar maximum now would be noticed by its effect on cell phones, GPS, weather satellites and many other modern technologies.
Right: Intense auroras over Fairbanks, Alaska, in 1958

NASA Events

Review: Eee Pad tablet transforms into laptop

(AP) -- The tablet computers that compete with the iPad have mostly been uninspiring. The Eee Pad Transformer stands out with a design that isn't just copied from the iPad: It's a tablet that turns into a ...

Google Music: Definitely beta

Google has been accused of overusing the "beta" tag on products it releases early. But with its new music service - Music - the beta tag is mandatory. It's still pretty raw, judging from my experience with it today.

Microsoft trying to take another bite of the Apple?

It was recently announced that Apple, assessed at $150 billion, surpassed Google as the world’s most valuable brand. This comes a year after overtaking Microsoft as the globe’s most valuable technology ...

Google works to close security loophole in Android

Google is in the process of updating its Android operating system to fix an issue that is believed to have left millions of smartphones and tablets vulnerable to personal data leaks. ..

NASA sees Tropical Storm 04W's thunderstorms grow quickly

This TRMM satellite 3-D image shows that some thunderstorm towers near TS 04W's center of circulation were punching up to heights of over 16 km (~9.9 miles) above the ocean's surface. Credit: NASA/SSAI, Hal Pierce


Tropical Storm 04W formed from the low pressure System 98W this morning in the northwestern Pacific. NASA's Tropical Rainfall Measuring Mission (TRMM) satellite watched the towering thunderstorms in the center of the tropical storm grow to almost 10 miles (16 km) high as it powered up quickly.

galaxies

"Advanced computer techniques allow us to combine data from the individual telescopes to yield images with the sharpness of a single giant telescope, one nearly as large as Earth itself," said Roopesh Ojha at NASA's Goddard Space Flight Center in Greenbelt, Md.
The enormous energy output of galaxies like Cen A comes from gas falling toward a black hole weighing millions of times the sun's mass. Through processes not fully understood, some of this infalling matter is ejected in opposing jets at a substantial fraction of the speed of light. Detailed views of the jet's structure will help astronomers determine how they form.
The jets strongly interact with surrounding gas, at times possibly changing a galaxy's rate of star formation. Jets play an important but poorly understood role in the formation and evolution of galaxies.

Left: The giant elliptical galaxy NGC 5128 is the radio source known as Centaurus A. Vast radio-emitting lobes (shown as orange in this optical/radio composite) extend nearly a million light-years from the galaxy. Credit: Capella Observatory (optical), with radio data from Ilana Feain, Tim Cornwell, and Ron Ekers (CSIRO/ATNF), R. Morganti (ASTRON), and N. Junkes (MPIfR). Right: The radio image from the TANAMI project provides the sharpest-ever view of a supermassive black hole's jets. This view reveals the inner 4.16 light-years of the jet and counterjet, a span less than the distance between our sun and the nearest star. The image resolves details as small as 15 light-days across. Undetected between the jets is the galaxy's 55-million-solar-mass black hole. Credit: NASA/TANAMI/Müller et al.
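The "sharpness of a single giant telescope" quoted earlier is diffraction at work: angular resolution is roughly the observing wavelength divided by the longest baseline between telescopes. The sketch below assumes an observing frequency near 8 GHz and a roughly 9,000 km intercontinental baseline (neither figure is given in this article) to show how the milliarcsecond scale, and hence the roughly 15-light-day detail at Cen A, comes about.

    import math

    C = 2.998e8                 # speed of light, m/s
    ARCSEC_PER_RAD = 206265.0
    LY_M = 9.461e15             # metres per light-year
    LIGHT_DAY_M = C * 86400.0   # metres per light-day

    def resolution_mas(freq_hz, baseline_m):
        """Approximate diffraction limit: theta ~ wavelength / baseline, in mas."""
        return (C / freq_hz) / baseline_m * ARCSEC_PER_RAD * 1e3

    for name, baseline_m in [("single 100 m dish", 100.0),
                             ("9,000 km VLBI baseline", 9.0e6)]:
        theta_mas = resolution_mas(8.4e9, baseline_m)
        theta_rad = math.radians(theta_mas / 3.6e6)
        detail_ld = theta_rad * 12e6 * LY_M / LIGHT_DAY_M   # size at ~12 million ly
        print(f"{name}: ~{theta_mas:.1f} mas, ~{detail_ld:,.0f} light-days at Cen A")

The continent-spanning baseline improves the resolution by a factor of nearly 100,000 over a single dish, which is what turns a fuzzy radio blob into a resolved jet.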
NASA's Fermi Gamma-ray Space Telescope has detected much higher-energy radiation from Cen A's central region. "This radiation is billions of times more energetic than the radio waves we detect, and exactly where it originates remains a mystery," said Matthias Kadler at the University of Wuerzburg in Germany and a collaborator of Ojha. "With TANAMI, we hope to probe the galaxy's innermost depths to find out."
Ojha is funded through a Fermi investigation on multiwavelength studies of Active Galactic Nuclei.
The astronomers credit continuing improvements in the Australian Long Baseline Array (LBA) with TANAMI's enormously increased image quality and resolution. The project augments the LBA with telescopes in South Africa, Chile and Antarctica to explore the brightest galactic jets in the southern sky.

Radio telescopes capture best-ever snapshot of black hole jets (w/ video)

Merging X-ray data (blue) from NASA's Chandra X-ray Observatory with microwave (orange) and visible images reveals the jets and radio-emitting lobes emanating from Centaurus A's central black hole. Credit: ESO/WFI (visible); MPIfR/ESO/APEX/A.Weiss et al. (microwave); NASA/CXC/CfA/R.Kraft et al. (X-ray)
(PhysOrg.com) -- An international team, including NASA-funded researchers, using radio telescopes located throughout the Southern Hemisphere has produced the most detailed image of particle jets erupting from a supermassive black hole in a nearby galaxy.

Display Applications

Overcoming the Drawbacks of Fluorescent Lamps

Liquid crystal displays (LCDs), thanks to continued improvements in resolution, response rates and scalability, have become the pervasive display technology for mobile phones, monitors, notebooks, HDTVs and other consumer electronics. Since LCD panels are transmissive and emit no light of their own, they require a backlight to provide illumination. Commonly, LCD backlighting units (BLUs) have employed cold cathode fluorescent lamps (CCFLs), similar to those used for commercial overhead lights, as their light source. However, CCFLs have a number of drawbacks. They require a high-voltage power supply and generally are the highest power-consuming component in large-format displays and HDTVs. CCFLs contain mercury, which has special disposal requirements and faces increasing limits on its use in many countries. Also, the space needed by CCFLs constrains how thin an LCD panel can be made. And as CCFLs are a tube-based technology, they are usually the first component to fail in an LCD display.

Light emitting diodes (LEDs) offer a semiconductor-based lighting solution which overcomes the limitations of CCFLs. With continued advancements in brightness and efficiency, LEDs are displacing CCFLs in backlighting applications, and as their price continues to drop, will take their place as a general lighting solution as well. LEDs deliver higher brightness than CCFLs and better power efficiency (more lumens per watt), use a lower-voltage power supply and generate less heat. LEDs can produce a much wider color gamut making movies and images appear more vibrant and lifelike. Because of their compact nature, LED backlights can enable ultra-slim displays and HDTVs less than half an inch thick. As a solid state component, like the other semiconductor devices in mobile phones, computers and HDTVs, LEDs have much longer lifetimes than CCFLs.

Harnessing the Benefits of LEDs

However, harnessing all the benefits of LEDs for backlighting still entails challenges. As point sources of light, LEDs can be used in an array topology in the backlight to directly illuminate the LCD panel. An array requires a high number of LEDs and therefore can be very expensive. In addition, in order to properly diffuse the light, arrays require a greater distance between the LEDs and the LCD panel, resulting in a thicker display. A thinner and more cost-effective solution is to use LEDs in an edge-lit configuration with a light guide panel (LGP) to turn the light into the viewing plane and distribute it across the display. This requires fewer LEDs but introduces the problem of maintaining uniformity of brightness over the entire backlight area. Maintaining uniformity and achieving the full benefits of edge-lit technology necessitates a high-efficiency LGP that can be economically manufactured.

XDR™ Memory Architecture

The Rambus XDR™ memory architecture is a total memory system solution that achieves an order of magnitude higher performance than today's standard memories while utilizing the fewest ICs. Perfect for compute and consumer electronics applications, a single, 4-byte-wide, 6.4Gbps XDR DRAM component provides 25.6GB/s of peak memory bandwidth.
Key components enabling the breakthrough performance of the XDR memory architecture are:
XDR DRAM is a high-speed memory IC that turbo-charges standard CMOS DRAM cores with a high-speed interface capable of 7.2Gbps data rates providing up to 28.8GB/s of bandwidth with a single device.
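The bandwidth figures quoted above follow directly from interface width multiplied by per-pin data rate. A quick sketch that simply restates the arithmetic implied by the text (treating 1 GB/s as 8 Gbit/s, as the marketing figures here appear to):

    def peak_bandwidth_gb_s(width_bytes, data_rate_gbps):
        """Peak bandwidth = interface width in bytes x per-pin data rate in Gbps."""
        return width_bytes * data_rate_gbps

    # A single 4-byte-wide XDR DRAM at the two data rates quoted in the text:
    print(peak_bandwidth_gb_s(4, 6.4))   # 25.6 GB/s
    print(peak_bandwidth_gb_s(4, 7.2))   # 28.8 GB/s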

HDTV Applications

“The year 2010 marks a major transition period for the US LCD TV market, when consumers increasingly are gravitating towards sets with more advanced features.” - Riddhi Patel, iSuppli Principal TV Analyst
Consumer research finds that among advanced features, HDTV buyers' top priority is picture quality. Capabilities such as full HD 1080p resolution, 480Hz frame rates, LED backlighting, 3D display, and advanced image processing and motion compensation create incredibly rich viewing experiences. Each of these capabilities demands higher levels of memory bandwidth.
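To see why each of these features pushes bandwidth up, it helps to estimate the raw pixel traffic involved. The numbers below are illustrative assumptions only (the text quotes no specific bandwidth figures), and they count just the uncompressed pixels moved per refresh; real image-processing pipelines read and write each frame several times, multiplying the totals further.

    def pixel_traffic_gb_s(width, height, bits_per_pixel, refresh_hz):
        """Uncompressed pixel data moved per second, in GB/s (1 GB = 1e9 bytes)."""
        return width * height * bits_per_pixel / 8 * refresh_hz / 1e9

    print(f"1080p, 24-bit, 60 Hz:   {pixel_traffic_gb_s(1920, 1080, 24, 60):.2f} GB/s")
    print(f"1080p, 24-bit, 480 Hz:  {pixel_traffic_gb_s(1920, 1080, 24, 480):.2f} GB/s")
    print(f"4K UHD, 48-bit, 120 Hz: {pixel_traffic_gb_s(3840, 2160, 48, 120):.2f} GB/s")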

In the future, consumers will expect even more. With requirements for handling multiple streams of 3D content, Ultra-High Definition (UHD) 4K picture resolution, 16-bit color and more, HDTV designers need a memory architecture that provides the highest bandwidth performance. However, even as functionality increases, OEMs will continue to face strong downward pressure on prices. Consumer focus on pricing is second only to picture quality. For this reason, achieving these advanced features while reducing BOM costs and minimizing the total number of devices used is critical.
As a result of recent government mandates and consumers’ desire to “buy green,” OEMs must also significantly reduce HDTV system power. Typical HDTV power budgets must fall by as much as 50% by 2013 in order to meet the most stringent requirements. Key to addressing power reduction is the move to LED technology for LCD backlights, and continued improvements to power efficiency of electronics components including the image processors and memory subsystem.

Gaming and Graphics Applications

Gaming and graphics are the performance applications for processors and memory. As such, leading-edge technology debuts here and eventually migrates to mainstream computing, mobile, and consumer electronics applications over time. State-of-the-art GPUs deliver functionality including photorealistic game characters and environments, support for multiple simultaneous displays, 3D image processing and video output, and full HD 1080p resolution. In order to support this functionality, the number of graphics processor cores and transistor counts per chip are skyrocketing. High-end GPUs have over 2 billion transistors and more than 1,000 graphics processor cores, up from fewer than 100 just 5 years ago.
Historically, these performance increases have come with a commensurate rise in power consumption. However, because of thermal, power supply and cost constraints that trend cannot continue. Top-of-the-line dual-GPU graphics cards and game consoles can draw as much as 300 watts (W) of power and must allocate a significant portion of the bill-of-materials (BOM) for the cooling system. While demand for higher performance will be ever present, power efficiency will increasingly become a first-order requirement.
GPU’s must also be scalable to support a broad range of performance levels and price points. Although they are the performance drivers, high-end graphics cards make up only a small percentage of the overall market. A single GPU platform must be configurable through the use of multiple memory types, or a single memory with a wide performance range.
The combination of these factors puts tremendous demands on the graphics memory system. Bandwidth requirements for next-generation gaming and graphics systems will exceed 500 gigabytes per second (GB/s). Meanwhile the total power budget must remain constant or even decrease. Similarly, price points must remain essentially unchanged for each of the respective performance segments.

Mobile Applications

Consumers have come to expect the entertainment experience of the living room from the mobile devices they carry every day. Advanced mobile devices offer high-definition (HD) resolution video recording, multi-megapixel digital image capture, 3D gaming and media-rich web applications. To pack all that functionality in a form factor that's thin, light and delivered with a pleasing aesthetic presents a tremendous challenge for mobile device designers. Chief among these challenges is the implementation of a high-performance memory architecture that meets the power efficiency constraints of battery-operated products.

In order to support these advanced mobile devices, memory bandwidth will experience significant growth. Over the course of the next 2-3 years, mobile gaming and graphics applications will push memory bandwidth requirements to 12.8 gigabytes per second and beyond. This bandwidth must be achieved within the constraints of the available battery life and cost budget.

Understanding the Energy Consumption of Dynamic Random Access Memories

Energy consumption has become a major constraint on the capabilities of computer systems. In large systems the energy consumed by Dynamic Random Access Memories (DRAM) is a significant part of the total energy consumption. It is possible to calculate the energy consumption of currently available DRAMs from their datasheets, but datasheets don’t allow extrapolation to future DRAM technologies and don’t show how other changes like increasing bandwidth requirements change DRAM energy consumption. This paper first presents a flexible DRAM power model which uses a description of DRAM architecture, technology and operation to calculate power usage and verifies it against datasheet values. Then the model is used together with assumptions about the DRAM roadmap to extrapolate DRAM energy consumption to future DRAM generations. Using this model we evaluate some of the proposed DRAM power reduction schemes.
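A datasheet-style calculation of the kind the abstract describes can be sketched in a few lines. The model below is a deliberately simplified toy: background power from the standby current, activate/precharge energy from the difference between IDD0 and standby, and read power weighted by bus utilisation. The IDD and timing values are placeholders rather than figures from any real datasheet, and refresh and I/O termination are ignored.

    def dram_power_mw(vdd=1.5,
                      idd3n_ma=35.0,      # active-standby current (placeholder)
                      idd0_ma=55.0,       # current with one ACT/PRE pair per tRC (placeholder)
                      idd4r_ma=140.0,     # current during continuous reads (placeholder)
                      trc_ns=48.0,
                      activates_per_s=2.0e6,
                      read_duty=0.3):
        """Very rough per-device DRAM power estimate, in milliwatts."""
        p_background = idd3n_ma * vdd                       # standby power, mW
        e_act_pj = (idd0_ma - idd3n_ma) * vdd * trc_ns      # energy per ACT/PRE pair, pJ
        p_act = e_act_pj * activates_per_s * 1e-9           # pJ per second -> mW
        p_read = (idd4r_ma - idd3n_ma) * vdd * read_duty    # extra current while reading
        return p_background + p_act + p_read

    print(f"~{dram_power_mw():.0f} mW per device for this hypothetical workload")

A full model like the paper's also parameterises the DRAM architecture and process technology, which is what allows it to be extrapolated to future generations.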

Terabyte Bandwidth Initiative

The Rambus Terabyte Bandwidth Initiative reflects Rambus' ongoing commitment to innovation in cutting-edge performance memory architectures to enable tomorrow's most exciting gaming and graphics products. Targeting a terabyte per second (TB/s) of memory bandwidth (1 terabyte = 1,024 gigabytes) from a single System-on-Chip (SoC), Rambus has pioneered new memory technologies capable of signaling at 20 gigabits per second (Gbps) while maintaining best-in-class power efficiency. In order to enable the transition from current generation memory architectures, Rambus has developed innovations that support both single-ended and differential memory interfaces in a single SoC package design with no additional pins.
The patented Rambus innovations that enable this breakthrough performance, unmatched power efficiency and multi-modal functionality include:
32X Data Rate – Enables high data rates while maintaining a low frequency system clock.
Fully Differential Memory Architecture (FDMA) – Improves signal integrity and reduces power consumption at high-speed operation.
FlexLink™ Command/Address (C/A) – Reduces the number of pins required for the C/A link.
FlexMode™ Interface – Provides multi-modal functionality, either single-ended or differential in a single SoC package design with no additional pins.
These innovations offer increased performance, higher and scalable data bandwidth, area optimization, enhanced signal integrity, and multi-modal capability for gaming, graphics and multi-core computing applications. With these innovations and others developed through the Terabyte Bandwidth Initiative, Rambus will provide the foundation for future memory architectures over the next decade.
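As a rough check on the 32X data-rate and terabyte-per-second targets above (the clock frequency below is an illustrative assumption, not a Rambus specification):

    system_clock_mhz = 625                # hypothetical base clock
    bits_per_io_per_clock = 32            # the "32X" data rate
    per_pin_gbps = system_clock_mhz * 1e6 * bits_per_io_per_clock / 1e9
    print(f"per-pin data rate: {per_pin_gbps:.0f} Gbps")                  # 20 Gbps

    target_gbits_per_s = 1 * 1024 * 8     # 1 TB/s, with 1 TB = 1,024 GB as defined above
    print(f"data I/Os needed: ~{target_gbits_per_s / per_pin_gbps:.0f}")  # ~410

In other words, a relatively modest system clock multiplied 32 times per I/O, spread across a few hundred data pins, is enough to reach the terabyte-per-second target.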
Background

Graphics cards and game consoles continue to be the marquee performance products for consumers. The insatiable demand for photorealistic game play, 3D images, and a richer end-user experience is constantly pushing system and memory requirements higher. Today's high-end graphics processors support as much as 128 gigabytes per second (GB/s) of memory bandwidth, and future generations will push memory bandwidth to upwards of 1 terabyte per second (TB/s).
However, increased data rates will be only one of the challenges for future graphics processors and game consoles. Historically, as performance has increased, so have power consumption and the physical size of the processor; two trends that cannot continue unchecked due to the physical limitations for both thermals and manufacturing. Future generation gaming and graphics memory systems must be able to deliver ultra-high bandwidth without significantly increasing the power consumption or pin count over current solutions.
Innovations

Rambus' Terabyte Bandwidth Initiative incorporates breakthrough innovations to achieve 1TB/s of bandwidth on a single System-on-Chip (SoC). These patented innovations include:
32X Data Rate transfers 32 bits of data per I/O on each clock cycle.
Asymmetric Equalization improves overall signal integrity while minimizing the complexity and cost of the DRAM device.
Enhanced Dynamic Point to Point (DPP) enables increased scaling of memory system capacity and access granularity.
Enhanced FlexPhase™ Timing Adjustment enables flexible phase relationships between signals, allowing precise on-chip alignment of data with clock.
FlexPhase circuit enhancements improve sensitivity and capability for very high performance memory systems operating at data rates of 10Gbps and higher.
FlexLink C/A is the industry's first full-speed, scalable, point-to-point command/address implemented through a single, differential, high-speed communications channel.
FlexMode Interface is a programmable assignment of signaling I/Os as data (DQ) or C/A, for either a single-ended or differential interface.
FDMA is the industry's first memory architecture that incorporates differential signaling technology on all key signal connections between the memory controller and the DRAM.
Jitter Reduction Technology improves the signal integrity of very high-speed communications links.

XDR™2 Memory Architecture

The XDR™2 memory architecture is the world's fastest memory system solution, capable of providing more than twice the peak bandwidth per device of a GDDR5-based system. Further, the XDR 2 memory architecture delivers this performance at 30% lower power than GDDR5 at equivalent bandwidth.

Designed for scalability, power efficiency and manufacturability, the XDR 2 architecture is a complete memory solution ideally suited for high-performance gaming, graphics and multi-core compute applications. Each XDR 2 DRAM can deliver up to 80GB/s of peak bandwidth from a single, 4-byte-wide, 20Gbps XDR 2 DRAM device. With this capability, systems can achieve memory bandwidth of over 500GB/s on a single SoC.
Capable of data rates up to 20Gbps, the XDR 2 architecture is part of the award-winning family of XDR products. With backwards compatibility to XDR DRAM and single-ended industry-standard memories, the XDR 2 architecture is part of a continuously compatible roadmap, offering a path for both performance upgrades and system cost reductions.
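The per-device and per-SoC figures above are mutually consistent, as a quick arithmetic check shows (the seven-device configuration below is simply the smallest that clears 500GB/s, not a stated Rambus design):

    import math

    width_bytes, gbps_per_pin = 4, 20                   # figures quoted above
    per_device_gb_s = width_bytes * gbps_per_pin        # 80 GB/s per XDR2 DRAM
    devices_needed = math.ceil(500 / per_device_gb_s)   # devices for >500 GB/s
    print(per_device_gb_s, "GB/s per device;", devices_needed, "devices exceed 500 GB/s")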

IBM briefly tops Microsoft in market value

A man walks past the IBM logo at the world's biggest high-tech fair, the CeBIT, in Hanover, Germany, in 2009. IBM briefly topped Microsoft in market value on Wall Street on Friday to become the second-largest technology company after Apple.
IBM briefly topped Microsoft in market value on Wall Street on Friday to become the second-largest technology company after Apple.

Google

Google Inc. is an American public corporation, earning revenue from advertising related to its Internet search, e-mail, online mapping, office productivity, social networking, and video sharing services, as well as from selling advertising-free versions of the same technologies. Google has also developed an open source web browser and a mobile operating system. The Google headquarters, the Googleplex, is located in Mountain View, California. As of March 31, 2009, the company has 19,786 full-time employees. The company runs millions of servers worldwide, which process about 1 petabyte of user-generated data every hour. Google handles hundreds of millions of search requests every day.
Google was founded by Larry Page and Sergey Brin while they were students at Stanford University and the company was first incorporated as a privately held company on September 4, 1998. The initial public offering took place on August 19, 2004, raising $1.67 billion, implying a value for the entire corporation of $23 billion. Google has continued its growth through a series of new product developments, acquisitions, and partnerships. Environmentalism, philanthropy and positive employee relations have been important tenets during the growth of Google. The company has been identified multiple times as Fortune Magazine's #1 Best Place to Work, and as the most powerful brand in the world (according to the Millward Brown Group).
Google's mission is "to organize the world's information and make it universally accessible and useful". The unofficial company slogan, coined by former employee and Gmail's first engineer Paul Buchheit, is "Don't be evil". Criticism of Google includes concerns regarding the privacy of personal information, copyright, and censorship.

Programming

Computer programming is the iterative process of writing or editing source code. Editing source code involves testing, analyzing, and refining, and sometimes coordinating with other programmers on a jointly developed program. A person who practices this skill is referred to as a computer programmer, software developer or coder. The sometimes lengthy process of computer programming is usually referred to as software development. The term software engineering is becoming popular as the process is seen as an engineering discipline.

Computer program

A computer program (also a software program, or just a program) is a sequence of instructions written to perform a specified task for a computer. A computer requires programs to function, typically executing the program's instructions in a central processor. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form, from which executable programs are derived (e.g., compiled), enables a programmer to study and develop its algorithms.
Computer source code is often written by professional computer programmers. Source code is written in a programming language that usually follows one of two main paradigms: imperative or declarative programming. Source code may be converted into an executable file (sometimes called an executable program or a binary) by a compiler and later executed by a central processing unit. Alternatively, computer programs may be executed with the aid of an interpreter, or may be embedded directly into hardware.
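The imperative/declarative split mentioned above is easiest to see side by side. Here is a minimal sketch in Python, computing the same result in both styles:

    # Imperative style: spell out each step and mutate state as you go.
    squares = []
    for n in range(10):
        if n % 2 == 0:
            squares.append(n * n)

    # Declarative (comprehension) style: describe the result, not the steps.
    squares_declarative = [n * n for n in range(10) if n % 2 == 0]

    assert squares == squares_declarative
    print(squares)    # [0, 4, 16, 36, 64]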
Computer programs may be categorized along functional lines: system software and application software. Many computer programs may run simultaneously on a single computer, a process known as multitasking.

More About Disk Drives

Floppies – Although floppy drives are being phased out in some new computers, there are still millions of them out there and you should know something about them. The floppy drive has a little slot on the face of the computer cabinet, and into this slot you can slide a floppy diskette like the one shown here. One of the reasons floppy drives are still around is that it is very easy to take a floppy diskette from one system to another.
Inside the floppy diskette is a round flat disk coated with iron oxide on each side so that data can be stored on it magnetically. This disk is called a platter, and it spins underneath an electro-magnet called the write head that puts data onto the platter surface. There is another head called the read head that copies data from the platter.
Once the disk has made one complete revolution, data is written all the way around. That is called a track. The head then moves a bit and writes another circle of data to create a second track. There are 80 tracks on each side, for a total of 160. Altogether, the floppy can hold 1.44 MB (megabytes) of data.
If we are looking for just a few bytes out of 1.44 million, it’s not enough to know which track it is in. To help narrow the search, the track is divided into 18 pieces, called sectors, which look much like slices of pie. Each sector holds 512 bytes of data, so if we know the track and sector number of the data we want, it won’t be hard to find.
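The 1.44 MB figure follows directly from the geometry just described, as a quick calculation shows (the odd "MB" label mixes decimal and binary units, a quirk of how floppies were marketed):

    tracks_per_side = 80
    sides = 2
    sectors_per_track = 18
    bytes_per_sector = 512

    capacity = tracks_per_side * sides * sectors_per_track * bytes_per_sector
    print(capacity, "bytes")          # 1,474,560 bytes
    print(capacity // 1024, "KB")     # 1,440 KB, marketed as "1.44 MB"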
Hard Drives – On a hard drive, data is also organized into tracks and sectors. While each sector still holds 512 bytes, there can be many more tracks and sectors on a platter. There are also multiple platters, one on top of the other like a stack of pancakes. Hard drives can hold much more data than floppies, sometimes into the billions of bytes, called gigabytes (GB).
Multiple platters require multiple read and write heads, all attached to the same arm so they move together. It’s called an actuator arm. When we are reading track number 10 on the top platter, the other heads are also positioned over track 10 of the other platters, and together all of these track 10s make up a cylinder. To specify the location of data on a hard drive it is necessary to say what cylinder, then the track and sector. Moving the heads from one cylinder to another is called a seek, and the amount of time this takes is the average seek time.
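Putting the cylinder/track/sector idea into a formula makes it concrete. The geometry below is a made-up example (real drives report their own figures), but the cylinder-head-sector to block-number mapping is the classic one:

    # Hypothetical drive geometry, for illustration only.
    HEADS = 4                 # one read/write head per platter surface
    SECTORS_PER_TRACK = 63
    BYTES_PER_SECTOR = 512

    def chs_to_block(cylinder, head, sector):
        """Classic CHS -> logical block number (sectors are numbered from 1)."""
        return (cylinder * HEADS + head) * SECTORS_PER_TRACK + (sector - 1)

    def byte_offset(cylinder, head, sector):
        return chs_to_block(cylinder, head, sector) * BYTES_PER_SECTOR

    # Track 10 on every surface together forms cylinder 10:
    print(chs_to_block(10, 0, 1))     # first sector of cylinder 10, top surface
    print(byte_offset(10, 3, 5))      # same cylinder, another surface, sector 5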
Although hard drives can hold much more data than floppies, the platters are sealed into a metal case that is fastened inside the computer cabinet, so it’s not an easy matter to move them from one system to another like you can with floppies. A hard drive is sometimes called a fixed disk for this reason.
Operating systems use a couple of different methods to keep track of what data is stored where on a drive. One common method uses a table called a File Allocation Table, or FAT, which is a section of the disk with pointers to data locations. There are two versions, called FAT16 and FAT32. Windows NT, XP and 2000 use a similar method called NTFS.
There are two different interfaces commonly used by hard drives to talk to the rest of the system. These are called IDE, for Integrated Drive Electronics, and SCSI, for Small Computer System Interface. The technical differences are not important at this point, but you should know about the two types because they are not interchangeable.
Figuring out where the heads should go next and then moving them there is the job of some electronic circuitry called the disk controller. Every disk drive has its own controller, which may be on the motherboard or inside the drive itself, depending on the type of drive.
There are a few more things you should know about disk drives before we leave the subject. The first sector of Cylinder 0, Track 0 is called the boot sector, and it contains a Master Boot Record (MBR) that shows whether the disk contains an operating system and the location of its code. If there is more than one operating system, the drive must be divided into multiple partitions. If not, then the whole drive will be a single partition. All of the disk space assigned to a partition is called a volume.
Another term you will encounter is a disk format. There is a high-level format, which creates a new file allocation table and is done with a FORMAT command. There is also a low-level format that creates a new pattern of sectors. A low-level format must be followed by an FDISK command to create a new Master Boot Record and partitions.
Last, we have the word media. This refers to the actual surface holding the data, which is the platter in the case of a disk drive. Because the floppy platter can be taken out of the drive, it is called removable media, while a hard drive is called fixed media.
Other Drives – Most systems today, especially home systems, have additional storage drives that use CD or DVD discs. The technology for both is similar, but DVDs hold much more data. These drives do not store data magnetically but use optical markings that are read with a laser. They are mostly used just to read data and not to write it. The full name for CD, in fact, is CD-ROM, which stands for Compact Disc - Read Only Memory. However, there are versions that can be used to write also, and these are called CD-RW and DVD-RW. Even so, they are mostly used to write just once for permanent storage, and are not practical for constantly changing data.
Like hard drives, CD-ROM drives can use either an IDE or SCSI interface. The version of IDE for CD-ROM drives is called ATAPI, and for SCSI the CD-ROM version is ASPI.
Because the discs can be removed, CD-ROM and DVD are considered removable media. There are other types of removable media also that are not as common, such as tape drives and Zip disks, which are similar to floppies but with a storage capacity of 100 or 250 MB. Zip disks and tape drives also use the ATAPI interface.

More About Video

The monitor is a passive device that just displays the video output from the system. However, so much data is needed for the constantly changing screen display that special provisions are made for it.
The video card (or video circuitry on the motherboard) has its own RAM just to hold the display information, and its own ROM BIOS to control the output. Some motherboards even have a special high-speed connection between the CPU and the video, called the AGP, or Accelerated Graphics Port.
The important numbers in evaluating a video display are how many distinct colors can be displayed and the resolution, which is how many pixels the image contains across and from top to bottom. Each dot of color making up the image is one pixel. As video technology has evolved there have been a number of standards, and each one has its own set of initials, like EGA, CGA or VGA. A common one is SVGA, which stands for Super Video Graphics Array and has a resolution of 800x600 (that’s 800 pixels across and 600 down). Some high-performance monitors use SXGA (1280x1024) or even UXGA, with a resolution of 1600x1200.
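A quick back-of-the-envelope calculation shows why the video card needs its own memory; the sketch below simply multiplies pixels by color depth for the resolutions mentioned above (24-bit color is assumed just for illustration).

    def framebuffer_bytes(width, height, bits_per_pixel):
        """Memory needed to hold one uncompressed screen image."""
        return width * height * bits_per_pixel // 8

    for name, (w, h) in {'SVGA': (800, 600),
                         'SXGA': (1280, 1024),
                         'UXGA': (1600, 1200)}.items():
        mb = framebuffer_bytes(w, h, 24) / (1024 * 1024)
        print(f"{name} at 24-bit color: about {mb:.1f} MB per frame")

    # SVGA works out to roughly 1.4 MB, SXGA to about 3.8 MB and UXGA to
    # about 5.5 MB - and the card has to refresh that image many times a second.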

CMOS and RTC

There is other start-up information that normally stays the same but that we might want to change once in a while. This includes information about the various pieces of hardware connected to the system, which disk drive to check first for the operating system, and that sort of thing. This data can’t be stored on the hard drive because we need it before the drive can even be read during boot-up. It can’t be stored in RAM because it would be lost at power-off, and it can’t be stored in the BIOS because we might need to change it.
The problem is solved by a type of RAM chip that uses very little power and is connected to a battery. This type of low-power memory chip is called CMOS, which stands for the technology used in the chip: Complementary Metal Oxide Semiconductor. This is probably more than you need to know, but I’m a fanatic about defining things. By the way, since batteries don’t last forever, if you leave your computer unplugged for about 5 years you’ll find it needs a bit of trickery to get it to boot again, because the CMOS information will be gone.
There is another feature in the computer that has the same requirements as CMOS, and that is the date and time function. This obviously needs to change every minute, but we don’t want to lose track of it when the computer is turned off. The circuitry for this is called the RTC, or Real Time Clock, and for convenience it is usually included in the same chip as the CMOS. A little trickle of juice from the CMOS battery keeps the clock running, and when you turn the computer on again it knows exactly what time and day it is. Convenient, isn’t it?

The BIOS

As we mentioned earlier, the computer knows what to do by taking instructions from programs stored in RAM. The main instructions come from a program called the operating system, and those instructions direct traffic for other programs called applications.
When the computer is turned off, all the instructions copied into the RAM are gone. When the system is turned on again, it needs to go out to the disk, get the operating system and load it into RAM, but there are no instructions in the RAM to tell it how to do this. The solution to this problem is a set of instructions that stay in memory and don’t get lost when the computer is turned off.
This set of instructions is called the BIOS, for Basic Input Output System. Since the instructions don’t need to change, they can be stored in a different kind of chip than we use for RAM. It’s called ROM, for Read Only Memory. We say that the instructions in the BIOS are hard-wired, and instead of software they are called firmware.
The computer goes through a process called booting up when it is first turned on. This involves executing the BIOS instructions, loading the operating system from disk into RAM, and then turning control of the computer over to the operating system after everything checks out OK. The term refers to somebody pulling themselves up by their own bootstraps (without outside help, in other words). Any computer term that includes ‘boot’ will have something to do with this start-up process.
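Purely as an illustration of that order of events (the function names here are invented for the sketch, not real firmware calls), the boot-up process can be modeled like this:

    def power_on_self_test():
        """Step 1: the BIOS checks that memory, keyboard and drives respond."""
        print("BIOS: running power-on self test...")
        return True

    def load_operating_system():
        """Step 2: the BIOS reads the boot sector and copies the OS loader into RAM."""
        print("BIOS: loading the operating system from disk into RAM...")
        return "operating system"

    def boot():
        if power_on_self_test():
            os = load_operating_system()
            print(f"Step 3: handing control of the computer over to the {os}.")

    boot()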

The perfect gaming computer at the right price: How to find the parts

It can be difficult to find a computer that meets all your gaming needs. Gaming technology is constantly improving, and systems can go from state-of-the-art to obsolete in a couple of years. Buying a computer presents a hassle as well: so many components determine the quality of a gaming system that it is hard to decide which machine to purchase without comparing every piece of hardware.
The answer is simple — decide your computer’s components for yourself and build your own gaming rig.
Building a computer is much simpler than it sounds. You need only to find the ideal components for your rig and assemble them.
Shopping for computer parts might seem intimidating, but it can be worth it. More than that, buying your own parts individually can help you save money.
The internal components you will need for your computer are a motherboard, a CPU, a hard drive, memory, a graphics card and a sound card. You will also need a case, monitor, keyboard, mouse and speakers.
You will want your motherboard and case to be compatible. Some motherboards are reduced in size to fit smaller cases. Once you’ve chosen a motherboard, you will want to choose the right CPU chip. For gaming purposes, you want to decide what your priorities are. Do you intend to overclock? Do you want to be able to play the most graphically advanced games for years to come, or just run your current library at decent speeds? Either way, multi-core processors are the way to go.
When choosing a hard drive, don’t skimp on space. Purchase at least 400 GB of space. You might also want to invest in a smaller SSD to use as a boot drive, while keeping most of your data on a separate hard drive.
If you are “future-proofing” your computer, you might want to go with a fast quad-core CPU. Dual-core processors, however, can handle most games at a significantly reduced price. A computer with a 3.2 GHz Dual-Core processor, for example, can run most games at advanced graphics settings with a good graphics card.
Selecting your graphics card can also be tricky at first. Remember one thing: graphics card companies release new products every year at high prices. That reduces the prices of their previous lines, which are still capable of running games. Older cards, such as the later cards in Nvidia’s 8 and 9 series, are capable of running most games and can be found at very affordable prices. For future-proofing purposes, shell out a bit more money, research the latest lines of graphics cards and buy last year’s releases. They will last you quite a long time.
Choosing RAM is less tricky. Again, if you wish to overclock, make sure you choose a brand designed to do so. Otherwise, peruse customer reviews and find a reliable brand that fits your budget. You will want at least 4 GB of RAM to ensure a quality gaming experience. The more the better.
Sound cards are not a major point of concern, for the most part. Anything that fits your motherboard can work, unless you are going for a home theater experience. This is up to you. If you wish to cut costs, an inexpensive sound card can cost about $30 and give you all the sound you’ll need.
Avoid expensive cases. A good, spacious case shouldn’t cost more than $100.
All told, a quality gaming rig should not cost you more than $1,200 to $1,400. Because the individual parts cost less than the same hardware bundled into a pre-built system, building your own rig will save you money. You will also get the satisfaction of running all the latest games on a machine you assembled yourself and admiring your own handiwork. It’s a win-win situation.
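As a rough sanity check, here is a small sketch that adds up a parts list; every price in it is purely illustrative, not a quote, so substitute the figures for whatever components you actually pick.

    # Made-up example prices only - adjust to the parts you choose.
    parts = {
        'case': 80, 'motherboard': 150, 'CPU': 250, 'RAM (4 GB)': 80,
        'hard drive': 90, 'SSD boot drive': 120, 'graphics card': 300,
        'sound card': 30, 'monitor': 180, 'keyboard, mouse and speakers': 70,
    }
    print(f"Estimated total: ${sum(parts.values()):,}")   # $1,350 with these numbers

With these hypothetical figures the total lands comfortably inside the $1,200 to $1,400 range mentioned above.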