Astrophysicists report radioactive cobalt in supernova explosion

Astrophysicists have detected the formation of radioactive cobalt during a supernova explosion, lending credence to a corresponding theory of supernova explosions.
Nebula emerged from Kepler supernova remnants. Credit: NASA/ESA/JHU/R.Sankrit & W.Blair


The article’s main author, Yevgeny Churazov (Space Research Institute of the Russian Academy of Sciences), together with co-authors including Sergei Sazonov of the Space Research Institute and MIPT, reported the results of their analysis of data collected with the INTEGRAL gamma-ray orbital telescope, which they used to detect the radioactive isotope cobalt-56 (56Co).

The isotope 56Co has a half-life of just 77 days and does not occur under normal conditions. During the giant thermonuclear explosion of a supernova, however, this short-lived radioactive isotope is produced in large quantities. The radiating cobalt was detected in the supernova SN2014J, located 11 million light-years from Earth.

Astrophysicists had never obtained such spectra before, because explosions this close are rare. Eleven million light-years is a large value on the galactic scale (the diameter of a galaxy is about 100,000 light-years, and the distance between stars is a few light-years), but on an intergalactic scale it is a relatively short distance. There are several hundred galaxies within a radius of ten million light-years, yet explosions of this kind (type Ia) occur only once every few centuries in a given galaxy. In the Milky Way, for example, a type Ia supernova last exploded in 1604.

SN2014J was discovered on January 21, 2014 by astronomer Steve Fossey and a group of students from University College London in the galaxy M82. Fossey reported the discovery, and several observatories, including INTEGRAL, began observations immediately. Russian researchers spent a million seconds of their quota on the INTEGRAL telescope to study the supernova. In addition to the spectra, they obtained data on how the brightness of the radiation changes over time.

According to a theory developed earlier, the remnants of a star barely radiate in the gamma range during the first tens of days after a type Ia explosion. The star’s shell is opaque in this region of the spectrum; a supernova begins to produce gamma radiation only after the outer layer becomes sufficiently rarefied. By that time, the radioactive nickel-56 synthesized during the explosion (half-life of about six days) has decayed into radioactive cobalt-56, whose lines were detected by the researchers.
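The interplay of the two half-lives can be sketched with the standard Bateman equation for a two-step decay chain. This is a minimal illustration, using the commonly tabulated half-lives of roughly 6.1 days for nickel-56 and 77 days for cobalt-56; it shows why cobalt-56 dominates by the time the shell turns transparent.

```python
import math

def bateman_daughter(n0: float, t_half_parent: float, t_half_daughter: float, t: float) -> float:
    """Number of daughter nuclei at time t for a parent -> daughter -> stable chain,
    starting from n0 parent nuclei and no daughter (Bateman equation)."""
    lp = math.log(2) / t_half_parent      # parent decay constant (1/day)
    ld = math.log(2) / t_half_daughter    # daughter decay constant (1/day)
    return n0 * lp / (ld - lp) * (math.exp(-lp * t) - math.exp(-ld * t))

# Fraction of the original nickel-56 present as cobalt-56, at several epochs:
T_NI, T_CO = 6.1, 77.2   # half-lives in days (standard tabulated values)
for day in (10, 30, 60, 120):
    frac = bateman_daughter(1.0, T_NI, T_CO, day)
    print(f"day {day:3d}: Co-56 fraction = {frac:.3f}")
```

By a few weeks in, most of the nickel has already become cobalt, which then decays away over months, matching the gamma-ray light curve the researchers observed.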

The essence of spectral analysis remains unchanged whatever the nature of the radiation. For light, X-rays and even radio waves, scientists first plot the spectrum: the relationship between intensity and frequency (or, equivalently, wavelength, since wavelength is inversely proportional to frequency).

The graph’s shape indicates the nature of the source of radiation and through what environment the radiation has passed. Spectral lines, or sharp peaks on such graphs, correspond to certain events like the emission or absorption of quanta by atoms during transition from one energy level to another.
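The energy-frequency-wavelength relationships involved can be made concrete with a small worked example. The constants are standard SI values; the line energies fed in are only illustrative.

```python
# Physical constants (SI)
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electron-volt

def photon_freq_and_wavelength(energy_kev: float) -> tuple[float, float]:
    """Frequency (Hz) and wavelength (m) of a photon of the given energy."""
    e_joule = energy_kev * 1e3 * EV
    nu = e_joule / H          # E = h * nu
    lam = C / nu              # wavelength is inversely proportional to frequency
    return nu, lam

for e_kev in (847.0, 1238.0):   # energies in the gamma-ray range, as for the cobalt lines
    nu, lam = photon_freq_and_wavelength(e_kev)
    print(f"{e_kev:7.1f} keV -> {nu:.3e} Hz, {lam:.3e} m")
```

Gamma-ray wavelengths come out around a picometre, far shorter than visible light, which is why a dedicated gamma-ray telescope such as INTEGRAL is needed.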

Cobalt-56 is formed with a surplus of energy, which it sheds in the form of gamma rays with energies of 847 keV and 1238 keV; other isotopes emit quanta of different energies, so their radiation could not be confused with that of cobalt-56.

The data collected by the INTEGRAL telescope also allowed the researchers to estimate how much radioactive cobalt was synthesized during the explosion: the equivalent of about 60 percent of the Sun’s mass.

Over time, cobalt-56 turns into the most common isotope of iron, 56Fe. It is the most common isotope precisely because it can be produced from the nickel ejected during supernova explosions (nickel turns into cobalt, and cobalt turns into iron).

Thus, the new results back up simulations of supernova explosions, and also confirm that our planet consists of matter that has passed through thermonuclear explosions on an astronomical scale.

Reference: E. Churazov, R. Sunyaev, J. Isern, J. Knödlseder, P. Jean, F. Lebrun, N. Chugai, S. Grebenev, E. Bravo, S. Sazonov, M. Renaud. Cobalt-56 γ-ray emission lines from the type Ia supernova 2014J. Nature, 2014; 512 (7515): 406 DOI: 10.1038/nature13672

Astronomy: Radio telescopes settle controversy over distance to Pleiades

A worldwide network of radio telescopes has measured the distance to the famous Pleiades star cluster to an accuracy within 1 percent. The result resolves a controversy raised by a satellite measurement that is now shown to be wrong; the incorrect measurement had challenged standard models of star formation and evolution.
With the parallax technique, astronomers observe an object at opposite ends of Earth’s orbit around the Sun to precisely measure its distance. Credit: Alexandra Angelich, NRAO/AUI/NSF


The astronomers studied the Pleiades, the famous “Seven Sisters” star cluster in the constellation Taurus, easily seen in the winter sky. The cluster includes hundreds of young, hot stars formed about 100 million years ago. As a nearby example of such young clusters, the Pleiades have served as a key “cosmic laboratory” for refining scientists’ understanding of how similar clusters form. In addition, astronomers have used the measured physical characteristics of Pleiades stars as a tool for estimating the distance to other, more distant, clusters.

Until the 1990s, the consensus was that the Pleiades are about 430 light-years from Earth. However, the European satellite Hipparcos, launched in 1989 to precisely measure the positions and distances of thousands of stars, produced a distance measurement of only about 390 light-years.

“That may not seem like a huge difference, but, in order to fit the physical characteristics of the Pleiades stars, it challenged our general understanding of how stars form and evolve,” said Carl Melis, of the University of California, San Diego. “To fit the Hipparcos distance measurement, some astronomers even suggested that some type of new and unknown physics had to be at work in such young stars,” he added.

To solve the problem, Melis and his colleagues used a global network of radio telescopes to make the most accurate possible distance measurement. The network included the Very Long Baseline Array (VLBA), a system of 10 radio telescopes ranging from Hawaii to the Virgin Islands; the Robert C. Byrd Green Bank Telescope in West Virginia; the 1,000-foot-diameter William E. Gordon Telescope of the Arecibo Observatory in Puerto Rico; and the Effelsberg Radio Telescope in Germany.

“Using these telescopes working together, we had the equivalent of a telescope the size of the Earth,” said Amy Mioduszewski, of the National Radio Astronomy Observatory (NRAO). “That gave us the ability to make extremely accurate position measurements — the equivalent of measuring the thickness of a quarter in Los Angeles as seen from New York,” she added.

The astronomers used this system to observe several Pleiades stars over about a year and a half to precisely measure the apparent shift in each star’s position caused by Earth’s orbital motion around the Sun. Seen from opposite ends of the Earth’s orbit, a star appears to move slightly against the backdrop of more-distant cosmic objects. This technique, called parallax, is the most accurate distance-measuring method astronomers have, and it relies on simple trigonometry.
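In the usual astronomical units the trigonometry reduces to a simple reciprocal. A minimal sketch follows; the parallax value plugged in is chosen to correspond to the ~443 light-year distance reported, not taken from the paper itself.

```python
LY_PER_PARSEC = 3.2616   # light-years per parsec (standard conversion)

def distance_ly_from_parallax(parallax_arcsec: float) -> float:
    """Distance in light-years from an annual parallax angle in arcseconds.
    d [parsec] = 1 / p [arcsec]: the small-angle form of the triangle whose
    baseline is the Earth-Sun distance (1 astronomical unit)."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# A parallax of about 7.4 milliarcseconds corresponds to roughly 443 light-years:
print(distance_ly_from_parallax(7.36e-3))   # ~443 ly
```

The tiny angle involved, a few thousandths of an arcsecond, is why an Earth-sized network of radio telescopes was needed to measure it to within one percent.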

The result of their work is a distance to the Pleiades of 443 light-years, accurate, the astronomers said, to within one percent. This is the most accurate and precise measurement yet made of the Pleiades distance.

“This is a relief,” Melis said, because the newly-measured distance is close enough to the pre-Hipparcos distance that the standard scientific models of star formation accurately represent the stars in the Pleiades.

“The question now is what happened to Hipparcos?” Melis said. Over four years of operation, the spacecraft measured distances to 118,000 stars. The cause of its error in measuring the distance to the Pleiades is unknown. Another spacecraft, Gaia, launched in December of 2013, will use similar technology to measure distances of about one billion stars.

“Radio-telescope systems such as the one we used for the Pleiades will provide a crucial cross-check to ensure the accuracy of Gaia’s measurements,” said Mark Reid, of the Harvard-Smithsonian Center for Astrophysics.

Many ancient cultures, including Native Americans, used the Pleiades as a test of vision. The more Pleiades stars one can discern — typically five to nine — the better one’s vision.

“Now we’ve used a system that provides modern astronomy’s sharpest ‘vision’ to solve a longstanding scientific debate about the Pleiades themselves,” said Melis.

Reference: C. Melis, M. J. Reid, A. J. Mioduszewski, J. R. Stauffer, G. C. Bower. A VLBI resolution of the Pleiades distance controversy. Science, 2014; 345 (6200): 1029 DOI: 10.1126/science.1256101

Genomic sequencing reveals mutations, insights into 2014 Ebola outbreak

In response to an ongoing, unprecedented outbreak of Ebola virus disease in West Africa, a team of researchers has rapidly sequenced and analyzed 99 Ebola virus genomes. Their findings could have important implications for rapid field diagnostic tests.
Created by CDC microbiologist Frederick A. Murphy, this colorized transmission electron micrograph (TEM) revealed some of the ultrastructural morphology displayed by an Ebola virus virion. Credit: CDC/Frederick A. Murphy


For the current study, researchers sequenced 99 Ebola virus genomes collected from 78 patients diagnosed with Ebola in Sierra Leone during the first 24 days of the outbreak (a portion of the patients contributed samples more than once, allowing researchers a clearer view into how the virus can change in a single individual over the course of infection). The team found more than 300 genetic changes that make the 2014 Ebola virus genomes distinct from the viral genomes tied to previous Ebola outbreaks. They also found sequence variations indicating that, from the samples sequenced, the EVD outbreak started from a single introduction into humans, subsequently spreading from person to person over many months.

The variations they identified were frequently in regions of the genome encoding proteins. Some of the genetic variation detected in these studies may affect the primers (starting points for DNA synthesis) used in PCR-based diagnostic tests, emphasizing the importance of genomic surveillance and the need for vigilance. To accelerate response efforts, the research team released the full-length sequences on National Center for Biotechnology Information’s (NCBI’s) DNA sequence database in advance of publication, making these data available to the global scientific community.
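Why a single mutation matters for a PCR-based test can be illustrated with a toy comparison of a primer against its intended binding site. The sequences below are invented for illustration, not drawn from the actual Ebola genome or any real diagnostic primer.

```python
def primer_mismatches(primer: str, target_site: str) -> int:
    """Count mismatched bases between a primer and the genome region it should bind.
    Mismatches weaken binding and can make a PCR assay miss the virus."""
    assert len(primer) == len(target_site)
    return sum(p != t for p, t in zip(primer.upper(), target_site.upper()))

# Hypothetical example: one new mutation falls inside a primer-binding site.
reference_site = "ATGGACCGTTAGCTAACGAT"
outbreak_site  = "ATGGACCGTTAGCTGACGAT"   # single A->G change
primer         = "ATGGACCGTTAGCTAACGAT"   # designed against the reference

print(primer_mismatches(primer, reference_site))  # 0 -- binds the reference perfectly
print(primer_mismatches(primer, outbreak_site))   # 1 -- the mutation may weaken binding
```

This is the sense in which genomic surveillance matters: primer sets designed against older reference genomes need rechecking as new variants accumulate.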

“By making the data immediately available to the community, we hope to accelerate response efforts,” said co-senior author Pardis Sabeti, a senior associate member at the Broad Institute and an associate professor at Harvard University. “Upon releasing our first batch of Ebola sequences in June, some of the world’s leading epidemic specialists contacted us, and many of them are now also actively working on the data. We were honored and encouraged. A spirit of international and multidisciplinary collaboration is needed to quickly shed light on the ongoing outbreak.”

The 2014 Zaire ebolavirus (EBOV) outbreak is unprecedented both in its size and in its emergence in multiple populated areas. Previous outbreaks had been localized mostly to sparsely populated regions of Middle Africa, with the largest outbreak in 1976 reporting 318 cases. The 2014 outbreak has manifested in the more densely-populated West Africa, and since it was first reported in Guinea in March 2014, 2,240 cases have been reported with 1,229 deaths (as of August 19).

Augustine Goba, Director of the Lassa Laboratory at the Kenema Government Hospital and a co-first author of the paper, identified the first Ebola virus disease case in Sierra Leone using PCR-based diagnostics. “We established surveillance for Ebola well ahead of the disease’s spread into Sierra Leone and began retrospective screening for the disease on samples as far back as January of this year,” said Goba. “This was possible because of our long-standing work to diagnose and study another deadly disease, Lassa fever. We could thus identify cases and trace the Ebola virus spread as soon as it entered our country.”

The research team increased the amount of genomic data available on the Ebola virus fourfold and used the technique of “deep sequencing” on all available samples. Deep sequencing reads each position in a genome many times over, generating high confidence in the result at every base. In this study, researchers sequenced to an average depth of 2,000 for each Ebola genome, giving an extremely close-up view of the virus genomes from the 78 patients. This high-resolution view allowed the team to detect multiple mutations that alter protein sequences — potential targets for future diagnostics, vaccines, and therapies.
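The arithmetic behind coverage depth is straightforward. A sketch, using the roughly 19,000-base length of the Ebola genome; the 100 bp read length is an assumption for illustration, not a figure from the paper.

```python
def mean_coverage(n_reads: int, read_length_bp: int, genome_length_bp: int) -> float:
    """Average sequencing depth: total bases sequenced divided by genome length
    (the simple Lander-Waterman estimate)."""
    return n_reads * read_length_bp / genome_length_bp

# With a ~19,000-base genome, reaching the reported ~2,000x average depth
# takes on the order of 380,000 reads of 100 bp each:
print(mean_coverage(380_000, 100, 19_000))   # 2000.0
```

Depth this high is what makes it possible to spot minor variants within a single patient, not just the consensus genome.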

The Ebola strains responsible for the current outbreak likely have a common ancestor, dating back to the very first recorded outbreak in 1976. The researchers also traced the transmission path and evolutionary relationships of the samples, revealing that the lineage responsible for the current outbreak diverged from the Middle African version of the virus within the last ten years and spread from Guinea to Sierra Leone by 12 people who had attended the same funeral.

The team’s catalog of 395 mutations (over 340 that distinguish the current outbreak from previous ones, and over 50 within the West African outbreak) may serve as a starting point for other research groups. “We’ve uncovered more than 300 genetic clues about what sets this outbreak apart from previous outbreaks,” said Stephen Gire, a research scientist in the Sabeti lab at the Broad Institute and Harvard. “Although we don’t know whether these differences are related to the severity of the current outbreak, by sharing these data with the research community, we hope to speed up our understanding of this epidemic and support global efforts to contain it.”

“There is an extraordinary battle still ahead, and we have already lost many friends and colleagues, including our good friend and colleague Dr. Humarr Khan, a co-senior author of this study,” said Sabeti. “By providing this data to the research community immediately, and by demonstrating transparency and partnership, we hope to honor Humarr’s legacy. We are all in this fight together.”

Reference: Gire, SK, Goba, A et al. Genomic surveillance elucidates Ebola virus origin and transmission during the 2014 outbreak. Science, 2014 DOI: 10.1126/science.1259657

New DNA study unravels the settlement history of the New World Arctic

A new DNA study unravels the settlement history of the New World Arctic. We know people have lived in the New World Arctic for about 5,000 years. Archaeological evidence clearly shows that a variety of cultures survived the harsh climate in Alaska, Canada and Greenland for thousands of years. Despite this, there are several unanswered questions about these people.
Greenlandic Inuit from the 1930s pictured in their traditional boats (umiaq), used for hunting and transportation. Credit: Jette Bang Photos/Arktisk Institut. Copyright: Arktisk Institut


Looking for ancient human remains in northern Greenland.

The North American Arctic was one of the last major regions to be settled by modern humans. This happened when people crossed the Bering Strait from Siberia and wandered into a new world. While the area has long been well researched by archaeologists, little is known of its genetic prehistory. In this study, researchers show that the Paleo-Eskimo, who lived in the Arctic from about 5,000 years ago until about 700 years ago, represented a distinct wave of migration, separate from both Native Americans — who crossed the Bering Strait much earlier — and the Inuit, who came from Siberia to the Arctic several thousand years after the Paleo-Eskimos.

“Our genetic studies show that, in reality, the Paleo-Eskimos — representing one single group — were the first people in the Arctic, and they survived without outside contact for over 4,000 years,” says Lundbeck Foundation Professor Eske Willerslev from Centre for GeoGenetics at the Natural History Museum, University of Copenhagen, who headed the study.

“Our study also shows that the Paleo-Eskimos, after surviving in near-isolation in the harsh Arctic environment for more than 4,000 years, disappeared around 700 years ago — about the same time when the ancestors of modern-day Inuit spread eastward from Alaska,” adds Dr. Maanasa Raghavan of Centre for GeoGenetics and lead author of the article.

Migration pulses into the Americas


In the archaeological literature, distinctions are drawn between the different cultural units in the Arctic in the period up to the rise of the Thule culture, which replaced all previous Arctic cultures and is the source of today’s Inuit in Alaska, Canada and Greenland. The earlier cultures included the Saqqaq or Pre-Dorset and Dorset, comprising the Paleo-Eskimo tradition, with the Dorset being further divided into three phases. All of these had distinctive cultural, lifestyle and subsistence traits as seen in the archaeological record. There were also several periods during which the Arctic was devoid of human settlement. These facts have further raised questions regarding the possibility of several waves of migration from Siberia to Alaska, or perhaps Native Americans migrating north during the first 4,000 years of the Arctic being inhabited.

“Our study shows that, genetically, all of the different Paleo-Eskimo cultures belonged to the same group of people. On the other hand, they are not closely related to the Thule culture, and we see no indication of assimilation between the two groups. We have also ascertained that the Paleo-Eskimos were not descendants of the Native Americans. The genetics reveals that there must have been at least three separate pulses of migration from Siberia into the Americas and the Arctic. First came the ancestors of today’s Native Americans, then came the Paleo-Eskimos, and finally the ancestors of today’s Inuit,” says Eske Willerslev.

Genetics and archaeology

The genetic study underpins some archaeological findings, but not all of them.

It rejects the speculation that the Paleo-Eskimos represented several different peoples, including Native Americans, or that they are direct ancestors of today’s Inuit. Also rejected are the theories that the Greenlanders on the east coast or the Canadian Sadlermiut, from Southampton Island in Hudson Bay, who died out as late as 1902-03, were surviving groups of Dorset people. Genetics shows that these groups were Inuit who had developed Dorset-like cultural traits.

The study clearly shows that the diversity of tools and ways of life over time, which in archaeology is often interpreted as a result of migration, does not in fact necessarily reflect influx of new people. The Paleo-Eskimos lived in near-isolation for more than 4,000 years, and during this time their culture developed in such diverse ways that it has led some to interpret them as different peoples.

“Essentially, we have two consecutive waves of genetically distinct groups entering the New World Arctic and giving rise to three discrete cultural units. Through this study, we are able to address the question of cultural versus genetic continuity in one of the most challenging environments that modern humans have successfully settled, and present a comprehensive picture of how the Arctic was peopled,” says Dr. Raghavan.

The first inhabitants

The study was unable to establish why the disappearance of the Paleo-Eskimos coincided with the ancestors of the Inuit beginning to colonise the Arctic. There is no doubt that the Inuit ancestors — who crossed the Bering Strait about 1,000 years ago and reached Greenland around 700 years ago — were technologically superior.

The Inuit’s own myths tell stories of a people before them, which in all likelihood refer to the Paleo-Eskimos. In the myths, they are referred to as the ‘Tunit’ or ‘Sivullirmiut’, which means “the first inhabitants.” According to these myths they were giants, who were taller and stronger than the Inuit, but easily frightened from their settlements by the newcomers.

Co-author Dr. William Fitzhugh from the Arctic Studies Centre at the Smithsonian Institution says: “Ever since the discovery of a Paleo-Eskimo culture in the North American Arctic in 1925, archaeologists have been mystified by their relationship with the Thule culture ancestors of the modern Inuit. Paleo-Eskimo culture was replaced rapidly around AD 1300-1400, their only traces being references to ‘Tunit’ in Inuit mythology and adoption of some elements of Dorset technology. This new genomic research settles outstanding issues in Arctic archaeology that have been debated for nearly a century, finding that Paleo-Eskimo and Neo-Eskimo people were genetically distinct, with separate origins in Eastern Siberia, and the Paleo-Eskimo remained isolated in the Eastern Arctic for thousands of years with no significant mixing with each other or with American Indians, Norse, or other Europeans.”

Reference: M. Raghavan, M. DeGiorgio, A. Albrechtsen, I. Moltke, P. Skoglund, T. S. Korneliussen, B. Gronnow, M. Appelt, H. C. Gullov, T. M. Friesen, W. Fitzhugh, H. Malmstrom, S. Rasmussen, J. Olsen, L. Melchior, B. T. Fuller, S. M. Fahrni, T. Stafford, V. Grimes, M. A. P. Renouf, J. Cybulski, N. Lynnerup, M. M. Lahr, K. Britton, R. Knecht, J. Arneborg, M. Metspalu, O. E. Cornejo, A.-S. Malaspinas, Y. Wang, M. Rasmussen, V. Raghavan, T. V. O. Hansen, E. Khusnutdinova, T. Pierre, K. Dneprovsky, C. Andreasen, H. Lange, M. G. Hayes, J. Coltrain, V. A. Spitsyn, A. Gotherstrom, L. Orlando, T. Kivisild, R. Villems, M. H. Crawford, F. C. Nielsen, J. Dissing, J. Heinemeier, M. Meldgaard, C. Bustamante, D. H. O’Rourke, M. Jakobsson, M. T. P. Gilbert, R. Nielsen, E. Willerslev. The genetic prehistory of the New World Arctic. Science, 2014; 345 (6200): 1255832 DOI: 10.1126/science.1255832


Detecting neutrinos, physicists look into the heart of the sun

Using one of the most sensitive neutrino detectors on the planet, physicists have directly detected neutrinos created by the ‘keystone’ proton-proton fusion process going on at the sun’s core for the first time.
Scientists report for the first time they have directly detected neutrinos created by the "keystone" proton-proton (pp) fusion process going on at the sun's core. Credit: NASA/SDO


The pp reaction is the first step of a reaction sequence responsible for about 99 percent of the Sun’s power, explains Andrea Pocar of the University of Massachusetts Amherst. Solar neutrinos are produced in nuclear processes and radioactive decays of different elements during fusion reactions at the Sun’s core. These particles stream out of the star at nearly the speed of light, as many as 420 billion hitting every square inch of the Earth’s surface per second.

Because they interact only through the weak nuclear force, they pass through matter virtually unaffected, which makes them very difficult to detect and to distinguish from trace nuclear decays of ordinary materials, he adds.

The UMass Amherst physicist, one principal investigator on a team of more than 100 scientists, says, “With these latest neutrino data, we are directly looking at the originator of the sun’s biggest energy producing process, or chain of reactions, going on in its extremely hot, dense core. While the light we see from the Sun in our daily life reaches us in about eight minutes, it takes tens of thousands of years for energy radiating from the sun’s center to be emitted as light.”

“By comparing the two different types of solar energy radiated, as neutrinos and as surface light, we obtain experimental information about the Sun’s thermodynamic equilibrium over about a 100,000-year timescale,” Pocar adds. “If the eyes are the mirror of the soul, with these neutrinos, we are looking not just at its face, but directly into its core. We have glimpsed the sun’s soul.”

“As far as we know, neutrinos are the only way we have of looking into the Sun’s interior. These pp neutrinos, emitted when two protons fuse forming a deuteron, are particularly hard to study. This is because they are low energy, in the range where natural radioactivity is very abundant and masks the signal from their interaction.”

The Borexino instrument, located deep beneath Italy’s Apennine Mountains, detects neutrinos as they interact with the electrons of an ultra-pure organic liquid scintillator at the center of a large sphere surrounded by 1,000 tons of water. Its great depth and many onion-like protective layers maintain the core as the most radiation-free medium on the planet.

Indeed, it is the only detector on Earth capable of observing the entire spectrum of solar neutrinos simultaneously. Neutrinos come in three types, or “flavors.” Those from the Sun’s core are of the “electron” flavor; as they travel away from their birthplace, they oscillate, or change, into the other two flavors, “muon” and “tau.” With this and previous solar neutrino measurements, the Borexino experiment has strongly confirmed this behavior of the elusive particles, Pocar says.

One of the crucial challenges in using Borexino is the need to control and precisely quantify all background radiation. Pocar says the organic scintillator at Borexino’s center is filled with a benzene-like liquid derived from “really, really old, millions-of-years-old petroleum,” among the oldest they could find on Earth.

“We needed this because we want all the carbon-14 to have decayed, or as much of it as possible, because carbon-14 beta decays cover the neutrino signals we want to detect. We know there are only three atoms of carbon-14 for each billion billion atoms in the scintillator, which shows how ridiculously clean it is.”

A related problem the physicists discuss in their new paper is that when two carbon-14 atoms in the scintillator decay almost simultaneously, an event they call a “pileup,” the combined signature is similar to that of a pp solar neutrino interaction. In a great advance for the analysis, Pocar says, “Keith Otis figured out a way to solve the problem of statistically identifying and subtracting these pileup events from the data, which basically makes this new pp neutrino analysis process possible.”
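The rate of such accidental coincidences can be estimated with a standard counting-statistics approximation. The decay rate and coincidence-window values below are purely illustrative, not Borexino’s actual numbers.

```python
def pileup_rate(single_rate_hz: float, window_s: float) -> float:
    """Approximate rate of two independent decays landing within the same
    coincidence window: for rate * window << 1, the accidental-coincidence
    rate is roughly rate^2 * window."""
    return single_rate_hz ** 2 * window_s

# Illustrative numbers only: a 100 Hz carbon-14 decay rate and a 230 ns
# pulse window give the expected rate of overlapping pairs.
r = pileup_rate(100.0, 230e-9)
print(f"{r:.2e} pileups per second")   # ~2.3e-3
```

Even a rate this small matters when the signal being hunted is itself a handful of events per day, which is why the statistical subtraction was the key step.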

Though detecting pp neutrinos was not part of the original National Science Foundation-sponsored Borexino experiment, “it’s a little bit of a coup that we could do it,” the astrophysicist says. “We pushed the detector sensitivity to a limit that has never been achieved before.”

Reference: G. Bellini, J. Benziger, D. Bick, G. Bonfini, D. Bravo, B. Caccianiga, L. Cadonati, F. Calaprice, A. Caminata, P. Cavalcante, A. Chavarria, A. Chepurnov, D. D’Angelo, S. Davini, A. Derbin, A. Empl, A. Etenko, K. Fomenko, D. Franco, F. Gabriele, C. Galbiati, S. Gazzana, C. Ghiano, M. Giammarchi, M. Göger-Neff, A. Goretti, M. Gromov, C. Hagner, E. Hungerford, Aldo Ianni, Andrea Ianni, V. Kobychev, D. Korablev, G. Korga, D. Kryn, M. Laubenstein, B. Lehnert, T. Lewke, E. Litvinovich, F. Lombardi, P. Lombardi, L. Ludhova, G. Lukyanchenko, I. Machulin, S. Manecki, W. Maneschg, S. Marcocci, Q. Meindl, E. Meroni, M. Meyer, L. Miramonti, M. Misiaszek, M. Montuschi, P. Mosteiro, V. Muratova, L. Oberauer, M. Obolensky, F. Ortica, K. Otis, M. Pallavicini, L. Papp, L. Perasso, A. Pocar, G. Ranucci, A. Razeto, A. Re, A. Romani, N. Rossi, R. Saldanha, C. Salvo, S. Schönert, H. Simgen, M. Skorokhvatov, O. Smirnov, A. Sotnikov, S. Sukhotin, Y. Suvorov, R. Tartaglia, G. Testera, D. Vignaud, R. B. Vogelaar, F. von Feilitzsch, H. Wang, J. Winter, M. Wojcik, A. Wright, M. Wurm, O. Zaimidoroga, S. Zavatarelli, K. Zuber, G. Zuzel. Neutrinos from the primary proton–proton fusion process in the Sun. Nature, 2014; 512 (7515): 383 DOI: 10.1038/nature13702

Early growth of giant galaxy, just 3 billion years after the Big Bang, revealed

The birth of massive galaxies, according to galaxy formation theories, begins with the buildup of a dense, compact core that is ablaze with the glow of millions of newly formed stars. Evidence of this early construction phase, however, has eluded astronomers — until now. Astronomers identified a dense galactic core, dubbed “Sparky,” using a combination of data from several space telescopes. Hubble photographed the emerging galaxy as it looked 11 billion years ago, just 3 billion years after the birth of our universe in the big bang.
This illustration reveals the celestial fireworks deep inside the crowded core of a developing galaxy, as seen from a hypothetical planetary system. The sky is ablaze with the glow from nebulae, fledgling star clusters, and stars exploding as supernovae. The rapidly forming core may eventually become the heart of a mammoth galaxy similar to one of the giant elliptical galaxies seen today. Credit: NASA, ESA, and Z. Levay and G. Bacon (STScI)


Because the infant galaxy is so far away, it is seen as it appeared 11 billion years ago, just 3 billion years after the birth of the universe in the big bang. Astronomers think the compact galaxy will continue to grow, possibly becoming a giant elliptical galaxy, a gas-deficient assemblage of ancient stars theorized to develop from the inside out, with a compact core marking its beginnings.

“We really hadn’t seen a formation process that could create things that are this dense,” explained Erica Nelson of Yale University in New Haven, Connecticut, lead author of the science paper announcing the results. “We suspect that this core-formation process is a phenomenon unique to the early universe because the early universe, as a whole, was more compact. Today, the universe is so diffuse that it cannot create such objects anymore.”

The research team’s paper appears in the August 27 issue of the journal Nature.

Although only a fraction of the size of the Milky Way, the tiny powerhouse galaxy already contains about twice as many stars as our galaxy, all crammed into a region only 6,000 light-years across. The Milky Way is about 100,000 light-years across. The barely visible galaxy may be representative of a much larger population of similar objects that are obscured by dust.
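A back-of-the-envelope comparison shows how extreme this density is. The sketch crudely treats both systems as uniform spheres, which real galaxies are not, so this is only an order-of-magnitude illustration.

```python
def density_ratio(star_ratio: float, d_small_ly: float, d_large_ly: float) -> float:
    """Rough stellar-density ratio between two star systems, treating both as
    uniform spheres (a deliberate oversimplification: real galaxies are disks
    and spheroids, so take this as order-of-magnitude only)."""
    volume_ratio = (d_large_ly / d_small_ly) ** 3
    return star_ratio * volume_ratio

# Twice the Milky Way's stars packed into 6,000 ly instead of 100,000 ly:
print(f"{density_ratio(2.0, 6_000, 100_000):,.0f}x denser")
```

The answer comes out in the thousands, which is the quantitative sense behind calling Sparky a “medieval cauldron forging stars.”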

“They’re very extreme environments,” Nelson said. “It’s like a medieval cauldron forging stars. There’s a lot of turbulence, and it’s bubbling. If you were in there, the night sky would be bright with young stars, and there would be a lot of dust, gas, and remnants of exploding stars. To actually see this happening is fascinating.”

Alongside determining the galaxy’s size from the Hubble images, the team dug into archival far-infrared images from the Spitzer and Herschel telescopes. The analysis allowed them to see how fast the young galaxy is churning out stars. Sparky is producing roughly 300 stars per year. By comparison, the Milky Way produces roughly 10 stars per year.

Astronomers believe that this frenzied star formation occurred because the galactic center is forming deep inside a gravitational well of dark matter, an invisible form of matter that makes up the scaffolding upon which galaxies formed in the early universe. A torrent of gas is flowing into this well at the galaxy’s core, sparking waves of star birth.

The sheer amount of gas and dust within an extreme star-forming region like this may explain why these compact galaxies have eluded astronomers until now. Bursts of star formation create dust, which builds up within the forming galaxy and can block some starlight. Sparky was only barely visible, and it required the infrared capabilities of Hubble’s Wide Field Camera 3, Spitzer, and Herschel to reveal the developing galaxy.

The observations indicate that the galaxy had been furiously making stars for more than a billion years (at the time the light we now observe began its long journey). But the galaxy didn’t keep up this frenetic pace for very long, the researchers suggested. Eventually, the galaxy probably stopped forming stars in the packed core. Smaller galaxies then might have merged with the growing galaxy, making it expand outward in size over the next 10 billion years, possibly becoming similar to one of the mammoth, sedate elliptical galaxies seen today.

“I think our discovery settles the question of whether this mode of building galaxies actually happened or not,” said team member Pieter van Dokkum of Yale University. “The question now is, how often did this occur? We suspect there are other galaxies like this that are even fainter in near-infrared wavelengths. We think they’ll be brighter at longer wavelengths, and so it will really be up to future infrared telescopes such as NASA’s James Webb Space Telescope to find more of these objects.”

Reference: Erica Nelson, Pieter van Dokkum, Marijn Franx, Gabriel Brammer, Ivelina Momcheva, Natascha Förster Schreiber, Elisabete da Cunha, Linda Tacconi, Rachel Bezanson, Allison Kirkpatrick, Joel Leja, Hans-Walter Rix, Rosalind Skelton, Arjen van der Wel, Katherine Whitaker, Stijn Wuyts. A massive galaxy in its core formation phase three billion years after the Big Bang. Nature, 2014; DOI: 10.1038/nature13616

Marijuana compound may offer treatment for Alzheimer’s disease, study suggests

Extremely low levels of the compound in marijuana known as delta-9-tetrahydrocannabinol, or THC, may slow or halt the progression of Alzheimer’s disease, a recent study from neuroscientists suggests.
A new study tested the effects of the marijuana compound THC on an Alzheimer’s disease cell model. Credit: © Uros Poteko

Researchers from the USF Health Byrd Alzheimer’s Institute showed that extremely low doses of THC reduce the production of amyloid beta, found in a soluble form in most aging brains, and prevent abnormal accumulation of this protein — a process considered one of the pathological hallmarks evident early in the memory-robbing disease. These low concentrations of THC also selectively enhanced mitochondrial function, which is needed to help supply energy, transmit signals, and maintain a healthy brain.

“THC is known to be a potent antioxidant with neuroprotective properties, but this is the first report that the compound directly affects Alzheimer’s pathology by decreasing amyloid beta levels, inhibiting its aggregation, and enhancing mitochondrial function,” said study lead author Chuanhai Cao, PhD, a neuroscientist at the Byrd Alzheimer’s Institute and the USF College of Pharmacy.

“Decreased levels of amyloid beta means less aggregation, which may protect against the progression of Alzheimer’s disease. Since THC is a natural and relatively safe amyloid inhibitor, THC or its analogs may help us develop an effective treatment in the future.”

The researchers point out that at the low doses studied, the therapeutic benefits of THC appear to prevail over the associated risks of THC toxicity and memory impairment.

Neel Nabar, a study co-author and MD/PhD candidate, recognized the rapidly changing political climate surrounding the debate over medical marijuana.

“While we are still far from a consensus, this study indicates that THC and THC-related compounds may be of therapeutic value in Alzheimer’s disease,” Nabar said. “Are we advocating that people use illicit drugs to prevent the disease? No. It’s important to keep in mind that just because a drug may be effective doesn’t mean it can be safely used by anyone. However, these findings may lead to the development of related compounds that are safe, legal, and useful in the treatment of Alzheimer’s disease.”

The body’s own cannabinoid receptor system interacts with naturally occurring cannabinoid molecules, which function similarly to the THC isolated from the cannabis (marijuana) plant.

Dr. Cao’s laboratory at the Byrd Alzheimer’s Institute is currently investigating the effects of a drug cocktail that includes THC, caffeine, and other natural compounds in a cellular model of Alzheimer’s disease, and will soon advance to a genetically engineered mouse model of Alzheimer’s.

“The dose and target population are critically important for any drug, so careful monitoring and control of drug levels in the blood and system are very important for therapeutic use, especially for a compound such as THC,” Dr. Cao said.

Reference: Chuanhai Cao et al. The Potential Therapeutic Effects of THC on Alzheimer’s Disease. Journal of Alzheimer’s Disease, August 2014; DOI: 10.3233/JAD-140093

