The dinosaur they could not kill: Brontosaurus is back

It would be pretty safe to say that everyone has heard of Brontosaurus, but in the 1970s the genus vanished from the palaeobiology lexicon. The ‘Bone Wars’ of post-Civil War US palaeontology stemmed from the astonishing prices that dinosaur skeletons fetched. The frenzy of competition to fill museums unearthed hundreds of specimens, but the financial enthusiasm did not extend to painstaking anatomy. Finding a new genus meant further profit, so a slapdash approach to taxonomy might pay well. So it did with the dinosaur family Diplodocidae for Othniel Marsh, one of the fossil marauders. He, along with his main competitor, Edward Cope, was a wizard fossicker, but lacked incentive to properly describe what he unearthed. In 1877 Marsh published a brief note about a new genus that he called Apatosaurus, then hurried off for more booty. Two years later he returned from the field with another monster reptile, and casually made a brief case for the ‘Thunder Lizard’, Brontosaurus. Unlike his ‘Deceptive Lizard’ for Apatosaurus, the English translation of Brontosaurus caught the public imagination and lingers to this day as the archetype for a mighty yet gentle, extinct beast. Yet professional palaeontologists were soon onto the lax ways of Marsh and Cope, and by 1903 deemed Brontosaurus to be taxonomically indistinguishable from Apatosaurus; as far as science was concerned the ‘Thunder Lizard’ was no more.

Artist’s impression of a Brontosaurus. The idea that it was wholly or mostly aquatic is now considered outdated. (credit: Wikipedia)

But, the legacy of frenzied fossil collecting of a century or more ago is huge collections that never made it to display, which form rich pickings for latter-day palaeontologists with all kinds of anatomical tools now at their disposal: the stuff of almost endless graduate studies. Emanuel Tschopp of the New University of Lisbon and colleagues took up the challenge of the Diplodocidae by examining 49 named specimens and 32 specimens from closely related taxa as controls, measuring up to 477 skeletal features (Tschopp, E. et al. 2015. A specimen-level phylogenetic analysis and taxonomic revision of Diplodocidae (Dinosauria, Sauropoda). PeerJ, v. 3, doi:10.7717/peerj.857). An unintended consequence was their discovery that 6 specimens of what had become Apatosaurus excelsus (formerly Marsh’s Brontosaurus) differed from all other members of the genus in 12 or more key characteristics. To taxonomists it would seem a little unfair for Brontosaurus not to be resurrected, and resurrection now looks likely.

Had this been about almost any other group of fossils, with the exception perhaps of the ever-popular tyrannosaurs, the lengthy paper would have passed unnoticed except by specialist palaeontologists. In a little over a week the open-access publication had more than 17 thousand views and 3300 copies were downloaded.

See also: Balter, M. 2015. Bully for Brontosaurus. Science, v. 348, p. 168

Magma rushed into largest layered intrusion

Chances are that the platinum in the catalytic converter that helps prevent your car emitting toxic gases in its exhaust fumes came from a vast igneous intrusion in South Africa known as the Bushveld Complex. The world’s most important source of noble metals formed by repeated differentiation of huge volumes of mafic magma to form thin, dense layers rich in sulfides, platinum-group metals and chromium ore, set in very thick layers of barren gabbro and other mafic to ultramafic rock. The intrusion is exposed over an area the size of Ireland and formed about 2 billion years ago. Its 370 000 to 600 000 km3 volume suggests that it was the magma chamber that fed flood basalts since lost to erosion. Successive pulses of basaltic magma built up a total thickness of about 8 kilometres of layered rock.

Layered igneous rocks in the Bushveld Complex (credit: Wikipedia)

The final product of the Bushveld differentiation process was minute pockets of material of more felsic composition trapped within overwhelmingly larger amounts of gabbro. One of the elements that ended up in these roughly granitic inclusions was zirconium, which mafic minerals are unable to accommodate while basaltic magma is crystallising. That formed minute crystals of the mineral zircon (ZrSiO4) in the residual pockets, which in turn locked up a variety of other elements, including uranium. Zircon can be dated using uranium’s radioactive decay to lead isotopes; its refusal to enter chemical reactions after crystallisation makes U/Pb dates of zircon among the most reliable available for geochronology, and their precision has become increasingly exquisite as mass spectrometry has improved. So the Bushveld Complex now has among the best records of magma chamber evolution (Zeh, A. et al. 2015. The Bushveld Complex was emplaced and cooled in less than one million years – results of zirconology, and geotectonic implications. Earth and Planetary Science Letters, v. 418, p. 103-114).
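The age equation behind such U/Pb dates is simple enough to sketch. Below is a minimal illustration in Python (not from the paper): it assumes a closed system with no initial lead, uses the standard 238U decay constant, and the 206Pb/238U ratio chosen is simply one that yields an age near the 2055 Ma cited for the Bushveld Complex.

```python
import math

# Decay constant of 238U in per year (half-life ~4.468 Gyr).
LAMBDA_238U = 1.55125e-10

def u_pb_age(pb206_u238_ratio):
    """Age in years from a radiogenic 206Pb/238U atomic ratio,
    assuming no initial Pb and closed-system behaviour since
    crystallisation: t = ln(1 + Pb/U) / lambda."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_238U

# An illustrative zircon with 206Pb/238U of about 0.3754
# returns an age close to 2055 Ma.
age_ma = u_pb_age(0.3754) / 1e6
```

The logarithm corrects for the fact that the parent uranium itself decays away while the daughter lead accumulates, which is why zircon ages are not a simple ratio of daughter to parent.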

Like a number of younger large igneous provinces, the Bushveld Complex took a very short time to form: about 950 thousand years, around 2055 Ma ago. That spans magma emplacement to final crystallisation, when the zircon ages were set, so the accumulation of magma itself probably took only 100 thousand years. This suggests that magma blurted into the lower crust at an average rate of around 5 cubic kilometres per year, and quite probably faster still if the magmatism was episodic. It requires a major stretch of the imagination to suggest that this could have occurred by some passive process. Instead, the authors suggest that while a plume of mantle material rose from well below the lithosphere, a large slab of lower lithosphere, formed from dense eclogite, broke off and literally fell into the deeper mantle. The resulting changes in stress in the lower lithosphere would have acted as a pump to drive the plume upwards, causing it to melt as pressure dropped and to squirt magma into the overlying continental crust. Although the authors do not mention it, this is reminiscent of the idea of large igneous provinces having sufficient power to eject large masses from the Earth’s surface: the Verneshot theory, recently exhumed in late 2014. The main difference is that the originators of the Verneshot theory appealed to explosive gas release.
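The quoted emplacement rate is a straightforward division, easy to verify as a back-of-envelope check using the figures from the text:

```python
# 370,000 to 600,000 km3 of magma accumulated over roughly 100,000 years.
volume_km3 = (370_000, 600_000)
accumulation_yr = 100_000

# Average eruption/intrusion rate in km3 per year for each volume estimate.
rates = [v / accumulation_yr for v in volume_km3]
# rates -> [3.7, 6.0], i.e. "around 5" cubic kilometres per year.
```

If the magmatism came in episodic pulses rather than a steady stream, instantaneous rates during the pulses would have been higher still, which is the point the text makes.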

A new explanation for banded iron formations (BIFs)

The main source of iron and steel has for more than half a century been Precambrian rock characterised by intricate interlayering of silica- and iron oxide-rich sediments, known as banded iron formations or BIFs. They always appear in what were shallow-water parts of Precambrian sedimentary basins. Although much the same kind of material turns up in sequences from 3.8 to 0.6 Ga, by far the largest accumulations date from 2.6 to 1.8 Ga, epitomised by the vast BIFs of the Palaeoproterozoic Hamersley Basin in Western Australia. This peak of iron-ore deposition brackets the time (~2.4 Ga) when world-wide evidence suggests that the Earth’s atmosphere first acquired tangible amounts of free oxygen: the so-called ‘Great Oxidation Event’. Yet the preservation of such enormous amounts of oxidised iron compounds in BIFs is paradoxical for two reasons: the amount of freely available atmospheric oxygen at their acme was far lower than today; and had the oceans contained much oxygen, dissolved ions of reduced iron (Fe2+) would not have been able to pervade seawater, as they must have for BIFs to accumulate in shallow water. Iron-rich ocean water demands a highly reducing chemical state.

Oblique view of an open pit mine in banded iron formation at Mount Tom Price, Hamersley region Western Australia (Credit Google earth)

The paradox of highly oxidised sediments being deposited when oceans were highly reduced was resolved, or seemed to have been, in the late 20th century. It involved a hypothesis that reduced, Fe-rich water entered shallow, restricted basins where photosynthetic organisms – probably cyanobacteria – produced localised enrichments in dissolved oxygen so that the iron precipitated to form BIFs. Later work revealed oddities that seemed to suggest some direct role for the organisms themselves, a contradictory role for the co-dominant silica-rich cherty layers and even that another kind of bacteria that does not produce oxygen directly may have deposited oxidised iron minerals. Much of the research focussed on the Hamersley BIF deposits, and it comes as no surprise that another twist in the BIF saga has recently emerged from the same, enormous repository of evidence (Rasmussen, B. et al. 2015. Precipitation of iron silicate nanoparticles in early Precambrian oceans marks Earth’s first iron age. Geology, v. 43, p. 303-306).

The cherty laminations have received a great deal less attention than the iron oxides. It turns out that they are heaving with minute particles of iron silicate, mainly the minerals stilpnomelane [K(Fe,Mg)8(Si,Al)12(O,OH)27] and greenalite [(Fe)2–3Si2O5(OH)4], which account for up to 10% of the chert. The authors suggest that ferruginous, silica-enriched seawater continually precipitated a mixture of iron silicate and silica, with cyclical increases in the amount of iron silicate. Being so tiny, the nanoparticles would have had a very high surface area relative to their mass and would therefore have been highly reactive. The authors propose that the present mineralogy of BIFs, which includes iron carbonates and, in some cases, sulfides as well as oxides, may have resulted from post-depositional mineral reactions. Much the same features occur in 3.46 Ga Archaean BIFs at Marble Bar in Western Australia that are almost a billion years older than the Hamersley deposits, suggesting that a direct biological role in BIF formation may not have been necessary.
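The surface-area claim is easy to quantify: for a spherical particle the specific surface area is 3/(ρr), so every halving of the radius doubles the reactive area per unit mass. A rough sketch, assuming a nominal grain density of 3000 kg/m3 and purely illustrative radii (neither figure comes from the paper):

```python
def specific_surface_area(radius_m, density_kg_m3=3000.0):
    """Surface area per unit mass (m2/kg) of a sphere:
    (4*pi*r^2) / (rho * 4/3*pi*r^3) = 3 / (rho * r)."""
    return 3.0 / (density_kg_m3 * radius_m)

# A 50 nm nanoparticle versus a 0.5 mm sand-sized grain:
nano = specific_surface_area(50e-9)   # ~20,000 m2/kg
sand = specific_surface_area(0.5e-3)  # ~2 m2/kg
```

The four-orders-of-magnitude difference is why nanoparticulate precipitates are so prone to the post-depositional reactions the authors invoke.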

Anthropocene: what (or who) is it for?

The made-up word chrononymy could be applied to the study of the names of geological divisions and their places on the International Stratigraphic Chart. Until 2008 that was something of a slow-burner, as careers go. It all began with Giovanni Arduino and Johann Gottlob Lehmann in the mid- to late 18th century, during the informal historic episode known as the Enlightenment. To them we owe the first statements of stratigraphic principles and the beginning of stratigraphic division: rocks separated into the major segments of Primitive, Secondary, Tertiary and Quaternary (Arduino). Thus stratigraphy seeks to set up a fundamental scale or chart for expressing Earth’s history as revealed by rocks. The first two divisions bit the dust long ago; Tertiary is now an informal synonym for most of the Cenozoic Era; only Quaternary clings on as the embattled Period at the end of the Cenozoic. All 11 Systems/Periods of the Phanerozoic, their 37 Series/Epochs and 85 Stages/Ages in the latest version of the International Stratigraphic Chart have been thrashed out since then, much being accomplished in the late 19th and early 20th centuries. Curiously, the world body responsible for sharpening up the definition of this system of ‘chrononymy’, the International Commission on Stratigraphy (ICS), seems not to have seen fit to record the history of stratigraphy: a great mystery, for without the chart geologists would be unable to converse with one another or with the world at large.

Yet now an increasing number of scientists are seriously proposing a new entry at the 4th level of division after Eon, Era and Period: a new Epoch that acknowledges the huge global impact of human activity on atmosphere, hydrosphere, biosphere and even lithosphere. They want it to be called the Anthropocene, and for some its eventual acceptance ought to relegate the current Holocene Epoch, in which humans invented agriculture, a form of economic intercourse and exchange known as capital and all the trappings of modern industry, to the 5th division or Stage. Earth-pages has been muttering about the Anthropocene for the past decade, as charted in a number of the links above, so if you want to know which way its author is leaning and how he came to find the proposal an unnecessary irritation, have a look at them. Last week things became sufficiently serious for another comment. Simon Lewis and Mark Maslin of the Department of Geography at University College London have summarised the scientific grounds alleged to justify an Anthropocene Epoch and its strict definition in a Nature Perspective (Lewis, S.L. & Maslin, M.A. 2015. Defining the Anthropocene. Nature, v. 519, p. 171-180), which is interestingly discussed in the same issue by Richard Monastersky.

Lewis and Maslin present two dates that their arguments and accepted stratigraphic protocols suggest as candidates for the start of the Anthropocene: 1610 and 1964 CE, both of which relate to features expressed in geological records that should last indefinitely. The first is a decline and eventual recovery in the atmospheric CO2 level recorded in high-resolution Antarctic ice cores between 1570 and 1620 CE that can be ascribed to the decline in the population of the Americas’ native peoples from an estimated 60 to 6 million. This result of first European colonisation – disease, slaughter, enslavement and famine – reduced agriculture and fire use and saw the regeneration of 5 × 10^7 hectares of forest, which drew down CO2 globally. It also coincides with the coolest part of the Little Ice Age, from 1594 to 1677 CE. They caution against the start of the Industrial Revolution as an alternative ‘Golden Spike’ since it was a diachronous event, beginning in Europe. Instead, they show that the second proposal, for a start in 1964, has a good basis in the record of global anthropogenic effects on the Earth marked by the peak fallout of radioactive isotopes generated by atomic weapons tests during the Cold War, principally 14C with its 5730-year half-life, together with other, longer-lived isotopes. The year 1964 is also roughly when growth in all aspects of human activity really took off, which some dub in a slightly Tolkienesque manner the ‘Great Acceleration’. [There is a growing taste for this kind of hyperbole, e.g. the ‘Great Oxygenation Event’ around 2.4 Ga and the ‘Great Dying’ for the end-Permian mass extinction.] Yet they neglect to note that the origin point for radiocarbon ages – the ‘present’ in years before present – had already been defined as 1950 CE, precisely because bomb-generated 14C contaminated all later materials just as radiocarbon dating became feasible. Lewis and Maslin conclude their Perspective as follows:

To a large extent the future of the only place where life is known to exist is being determined by the actions of humans. Yet, the power that humans wield is unlike any other force of nature, because it is reflexive and therefore can be used, withdrawn or modified. More widespread recognition that human actions are driving far-reaching changes to the life-supporting infrastructure of Earth may well have increasing philosophical, social, economic and political implications over the coming decades.

So the Anthropocene adds the future to the stratigraphic column, which seems more than slightly odd. As Richard Monastersky notes, it is in fact a political entity: part of some kind of agenda or manifesto; a sort of environmental agitprop from the ‘geos’. As if there were not already dozens of rational reasons to change human impacts and haul society back from catastrophe – reasons which many people outside the scientific community have good cause to see as hot air on which ‘the great and the good’ never take concrete action. Monastersky also notes that the present Anthropocene record in naturally deposited geological materials accounts for less than a millimetre at the top of ocean-floor sediments. How long might the proposed Epoch last? If action to halt anthropogenic environmental change does eventually work, the Anthropocene will be very short in historic terms, let alone those which form the currency of geology. If it doesn’t, there will be nobody around able to document, let alone understand, the epochal events recorded in rocks. At its worst, for some alien planetary scientists visiting far in the future, an Anthropocene Epoch will almost certainly be far shorter than the 10^4 to 10^5 years represented by the hugely more important Palaeozoic-Mesozoic and Mesozoic-Cenozoic boundary sequences; but with no Wikipedia entry.

Not everybody gets a vote on this kind of thing, such is the way that science is administered, but all is not lost. The final arbiter is the Executive Committee of the International Union of Geological Sciences (IUGS), but first the Anthropocene’s status as a new Epoch has to be approved by 60% of the ICS Subcommission on Quaternary Stratigraphy, if put to a vote. Then such a ‘supermajority’ would be needed from the chairs of all 16 of the ICS subcommissions that study Earth’s major time divisions. But before any of that, the 37 members of the Subcommission on Quaternary Stratigraphy’s ‘Anthropocene’ working group have to decide whether or not to submit a proposal: things may drag on at an appropriately stratigraphic pace. Yet the real point is that the effect of human activity on Earth-system processes has been documented and discussed at length. I’ll give Marx the last word on this: ‘The philosophers have only interpreted the world, in various ways. The point, however, is to change it.’ A new stratigraphic Epoch doesn’t really seem to measure up to that…

Genus Homo pushed back nearly half a million years

Bill Deller, a friend whose Sunday is partly spent reading the Observer and Sunday Times from cover to cover, alerted me to a lengthy article by Britain’s doyen of palaeoanthropologists, Chris Stringer of the Natural History Museum (Stringer, C. 2015. First human? The jawbone that makes us question where we’re from. Observer, 8 March 2015, p. 36). His piece sprang from two Reports published online in Science that describe about a third of a hominin lower jaw unearthed – where else? – in the Afar Depression of Ethiopia. The discovery site of Ledi-Geraru is a mere 30 km from the most hominin-productive ground in Africa: Hadar and Dikika for Australopithecus afarensis (‘Lucy’ at 3.2 Ma and ‘Selam’ at 3.3 Ma, respectively); Gona for the earliest-known stone tools (2.6 Ma); and the previously earliest member of the genus Homo, also close to Hadar.

On some small objects mighty tales are hung, and the Ledi-Geraru jawbone with its 6 teeth is one of them. It has features intermediate between Australopithecus and Homo, but more important is its age: Pliocene, around 2.8 to 2.75 Ma (Villmoare, B. and 8 others 2015. Early Homo at 2.8 Ma from Ledi-Geraru, Afar, Ethiopia. Science Express doi: 10.1126/science.aaa1343). The sediments from which Ethiopian geologist Chalachew Seyoum, studying at Arizona State University, extracted the jawbone formed in a river floodplain. Other fossils suggest open grassland rich with game, similar to the Serengeti of Tanzania, with tree-lined river courses. The sediments were laid down at a time of climatic transition from humid to more arid conditions, which several authors have suggested provided the environmental stresses that drove evolutionary change, including that of hominins (DiMaggio, E.N. and 10 others 2015. Late Pliocene fossiliferous sedimentary record and the environmental context of early Homo from Afar, Ethiopia. Science Express doi: 10.1126/science.aaa1415).

Designating the jawbone as evidence for the earliest known member of our genus rests almost entirely on the teeth, and so is at best tentative pending further fossil material. The greatest complicating factor is that the earliest supposed fossils of Homo (i.e. H. habilis, H. rudolfensis and others yet to be assigned a species identity) are a morphologically more mixed bunch than those younger than 2 Ma, such as H. ergaster and H. erectus. Indeed, every one of them has some significant peculiarity. That diversity even extends to the earliest humans to have left Africa, found in 1.8 Ma old sediments at Dmanisi in Georgia (Homo georgicus), where each of the 5 well-preserved skulls is unique. The Dmanisi hominins have been likened to the type specimen of H. habilis, but such is the diversity of both that the comparison is probably a shot in the dark.

Replica of OH 7, the deformed type specimen of Homo habilis. (credit: Wikipedia)

Coinciding with the new Ethiopian hominin papers, a study published in Nature the same week describes how the type specimen of H. habilis (found, in close association with crude stone tools and cut bones, by Mary and Louis Leakey at Olduvai Gorge, Tanzania in 1960) has been digitally restored from the somewhat deformed state in which it was found (Spoor, F. et al. 2015. Reconstructed Homo habilis type OH 7 suggests deep-rooted species diversity in early Homo. Nature, v. 519, p. 83-86, doi:10.1038/nature14224). The restored lower jaw and teeth, and part of the cranium, deepen the mysterious diversity of the group of fossils for which it is the type specimen, but boost its standing as regards probable brain size from within the range of australopithecines to significantly larger (~750 ml compared with <600 ml), though still only about half that of modern humans. The habilis diversity is largely to do with jaws and teeth: it is the estimated brain size, as well as the type specimen’s association with tools and their use, that elevates them all to human status. Yet the reconstruction is said by some to raise the issue of a mosaic of early human species. The alternative is an unusual degree of shape diversity (polymorphism) within a single emerging species, which is not much favoured these days. An issue to consider is: what constitutes a species? For living organisms morphological similarity has to be set against the ability for fertile interbreeding. Small, geographically isolated populations of a single species often diverge markedly in what they look like yet continue to be interfertile, the opposite being convergence in form by organisms that are completely unrelated.

Palaeontologists tend to go largely with division on grounds of form, so that when a specimen falls outside some agreed morphological statistics it crosses a species boundary. Set against that is the incontrovertible evidence that at least 3 recent human species interbred successfully to leave their mark in all living non-African humans. What if the first humans emerging from, probably, a well-defined population of australopithecines continued to interbreed with them, right up to the point when the australopithecines became extinct about 2 Ma ago?

On a more concrete note, the Ledi Geraru hominin is a good candidate for the maker of the first stone tools found ‘just down the road’ at Gona!

Wet spells in Arabia and human migration

In September 2014, Earth Pages reported how remote sensing had revealed clear signs of extensive fossil drainage systems and lakes at the heart of the Arabian Peninsula, now the hyper-arid Empty Quarter (Rub al Khali). Their association with human stone artefacts dated as far back as 211 ka, those with affinities to collections from East Africa clustering between 74 and 90 ka, supported the sub-continent possibly having been an early staging post for fully modern human migrants from Africa. Members of the same archaeological team, based at Oxford University, have now published late Pleistocene palaeoclimatic records from alluvial-fan sediments in the eastern United Arab Emirates that add detail to this hypothesis (Parton, A. et al. 2015. Alluvial fan records from southeast Arabia reveal multiple windows for human dispersal. Geology, advance online publication doi:10.1130/G36401.1).

The eastern part of the Empty Quarter is a vast bajada formed from coalesced alluvial fans deposited by floods rising in the Oman Mountains and flowing westwards to disappear in the great sand sea of dunes. Nowadays floods during the Arabian Sea monsoons are few and far between, and restricted to the west-facing mountain front. Yet older alluvial fans extend far out into the Empty Quarter, some being worked for aggregate used in the frantic building boom in the UAE. In one of the quarries, about 100 km south of the Jebel Faya Upper Palaeolithic tool site, the alluvial deposit contains clear signs of cyclical deposition in the form of 13 repeated gradations from coarse to fine waterlain sediment, each capped by fossil soils and dune sands. The soils contain plant remains that suggest they formed when the area was colonised by extensive grasslands under humid conditions.

Dating the sequence reveals that 6 of the cycles formed over a 10 thousand-year period between 158 and 147 ka, which coincides with a peak in monsoon intensity roughly between 160 and 150 ka during the glacial period that preceded the last one. Three later cycles formed at times of monsoon maxima during the last interglacial and in the climatic decline leading to the last glacial maximum: at ~128 to 115 ka, 105 to 95 ka and 85 to 74 ka. So, contrary to the long-held notion that the Arabian Peninsula formed a hostile barrier to migration, from time to time it was a well-watered area that probably had abundant game. Between times, though, it was a vast, inhospitably dry place.

Satellite view of the Arabian Peninsula. The Oman mountains sweep in a dark arc south-eastwards from the Straits of Hormuz at the mouth of the Persian Gulf. The brownish grey area to the south of the arc is the bajada that borders the bright orange Empty Quarter (credit: NOAA)

The authors suggest that the climatic cyclicity was dominated by a 23 ka period. As regards the southern potential migration route out of Africa, via the Straits of Bab el Mandab, which has been highly favoured by palaeoanthropologists lately, opportunities for migration in the absence of boats would have depended on sea-level lows. Such lows do not necessarily coincide with the wet windows of opportunity for crossing the cyclically arid Arabian Peninsula that would allow both survival and proceeding onwards to south and east Asia. So far as I can judge, the newly published work seems to favour a northward then eastward means of migration, independent of fluctuations in land-ice volume and sea level, whenever the driest areas received sufficient water to support vegetation and game. In fact most of NE Africa is subject to the Arabian Sea monsoons, and when they were at their least productive, crossing much of Ethiopia’s Afar depression and the coastal areas of Eritrea, Sudan and Egypt would have been almost as difficult as the challenge of the Empty Quarter.

A tsunami and NW European Mesolithic settlements

About 8.2 ka ago, sediments on the steep continental edge of the North and Norwegian Seas slid onto the abyssal plain of the North Atlantic. This huge mass displacement triggered a tsunami whose effects manifest themselves in sand inundations at the heads of inlets and fjords along the Norwegian and eastern Scottish coasts that reach up to 10 m above current sea level. At that time actual sea level was probably 10 m lower than at present, as active melting of the last glacial ice sheets was still underway: the waves may have reached 20-30 m above the 8.2 ka sea level. So powerful were the tsunami waves in the constricted North Sea that they may have separated the British Isles from the European mainland by inundating Doggerland, the low-lying riverine plain that joined them before global sea level rose above its elevation at around the same time. Fishing vessels plying the sandbanks of the southern North Sea often trawl up well-preserved remains of land mammals and even human tools: almost certainly Doggerland was prime hunting territory during the Mesolithic, as well as an easily traversed link to the then British Peninsula. Mesolithic settlements close by tsunami deposits are known from Inverness in Scotland and Dysvikja north of Bergen in Norway, and individual Mesolithic dwellings occur on the Northumberland coast. The tsunami must have had some effect on the Mesolithic hunter-gatherers who had migrated into a game-rich habitat. The question is: how devastating was it?

Reconstruction of Mesolithic hut based on evidence from two archaeological sites in Northumberland, UK. (credit: Lisa Jarvis; see http://www.maelmin.org.uk/index.html )

Hunter-gatherers move seasonally with favoured game species, often returning to semi-permanent settlements for the least fruitful late-autumn to early spring season. The dominant prey animals, red deer and reindeer, also tend to migrate to the hills in summer, partly to escape blood-feeding insects, returning to warmer, lower elevations for the winter. If that movement pattern dominated Mesolithic populations, then the effects of the tsunami would have been most destructive between late autumn and early spring. During warmer seasons people may not even have noticed its effects, although coastal habitations and boats may have been destroyed.

Stair-step moss (credit: Wikipedia)

Norwegian scientists Knut Rydgren and Stein Bondevik from Sogn og Fjordane University College, Sogndal, devised a clever means of working out the tsunami’s timing from mosses preserved in the sand inundations that were added to near-shore marine sediments (Rydgren, K. & Bondevik, S. 2015. Moss growth patterns and timing of human exposure to a Mesolithic tsunami in the North Atlantic. Geology, v. 43, p. 111-114). Well-preserved stems of the stair-step moss Hylocomium splendens, still containing green chlorophyll, occur along with ripped-up fragments of peat and soil near the top of the tsunami deposit, which post-glacial isostatic rebound has since raised to form a bog. This moss grows shoots annually, the main growth spurt coming at the end of the summer to early autumn growing season. Nineteen samples preserved new shoots that were as long as or longer than the preceding year’s, suggesting that the moss was torn up by the tsunami while still alive towards the end of the growing season, around late October. All around the North Sea Mesolithic people could have been returning from warm-season hunting trips to sea-shore winter camps, only to have their dwellings, boats and food stores devastated, if indeed they survived such a terrifying event.

Glacial cycles and sea-floor spreading

The London Review of Books recently published a lengthy review (Godfrey-Smith, P. 2015. The Ant and the Steam Engine. London Review of Books, v. 37, 19 February 2015 issue, p. 18-20) of the latest contribution to Earth System Science by James Lovelock, the man who almost singlehandedly created that popular paradigm through his Gaia concept of a self-regulating Earth (Lovelock, J. 2014. A Rough Ride to the Future. Allen Lane: London; ISBN 978 0 241 00476 0). Coincidentally, on 5 February 2015 Science published online a startling account of the inner-outer-inner synergism of Earth processes and climate (Crowley, J.W. et al. 2015. Glacial cycles drive variations in the production of oceanic crust. Science doi:10.1126/science.1261508). In fact serendipity struck twice: the following day a similar online article appeared in a leading geophysics journal (Tolstoy, M. 2015. Mid-ocean ridge eruptions as a climate valve. Geophysical Research Letters, doi:10.1002/2014GL063015).

Both articles centred on the most common topographic features on the ocean floor: abyssal hills. These linear features trend parallel to seafloor spreading centres and the magnetic stripes that chart the progressive additions to oceanic lithosphere at constructive margins. Abyssal hills are most common around intermediate- and fast-spreading ridges and have been widely regarded as fault-tilt blocks resulting from extensional forces where cooling of the lithosphere causes it to sag towards the abyssal plains. However, some have suggested a possible link with variations in magma production beneath ridge axes, as pressure due to seawater depth varied with rising and falling sea level through repeated glacial cycles. Mantle melting beneath ridges results from depressurisation of rising asthenosphere: so-called ‘adiabatic’ melting. Pressure changes equivalent to sea-level fluctuations of around 100-130 m should theoretically have an effect on magma productivity, with falls resulting in additional volumes of lava erupted on the ocean floor and thus bathymetric highs.
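The size of the pressure change involved is easy to estimate: a fall in sea level of height Δh unloads the ridge crest by ΔP = ρgΔh, which for 100-130 m of seawater is roughly 1-1.3 MPa. A quick check, assuming a nominal seawater density of 1025 kg/m3:

```python
RHO_SEAWATER = 1025.0  # kg/m3, typical open-ocean value
G = 9.81               # m/s2

def delta_p_mpa(dh_m):
    """Pressure change in MPa from a sea-level change of dh_m metres
    of seawater: delta_P = rho * g * delta_h."""
    return RHO_SEAWATER * G * dh_m / 1e6

low = delta_p_mpa(100)   # ~1.0 MPa
high = delta_p_mpa(130)  # ~1.3 MPa
```

Small against the gigapascal pressures in the melting region, but decompression melting is sensitive enough that a change of this order can measurably modulate melt production.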


Formation of mid-ocean ridge topography, including abyssal hills that parallel the ridge axis. (credit: Wikipedia)

A test of this hypothesis would be to see how the elevation of the sea floor adjacent to spreading axes changes with the age of the underlying crust. John Crowley and colleagues from Oxford and Harvard Universities and the Korea Polar Research Institute analysed new bathymetry across the Australian-Antarctic Ridge, while Maya Tolstoy of Columbia University performed similar work across the Southern East Pacific Rise. In both studies, frequency analysis of changes in bathymetry through time, as calibrated by local magnetic stripes, showed significant peaks at periods of roughly 23, 41 and 100 ka in the first study and at 100 ka in the second. These correspond to the well-known Milankovitch periods due to precession, changing axial tilt and orbital eccentricity: persuasive support for a glacial control over mid-ocean ridge magmatism.
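The kind of frequency analysis involved can be illustrated with a toy example (my own sketch, not the authors' actual workflow or data): given a bathymetry-versus-crustal-age series, a simple periodogram should recover Milankovitch periods even when they are buried in noise.

```python
# Toy spectral analysis: recover 23, 41 and 100 ka Milankovitch cycles
# from a synthetic, noisy bathymetry-vs-age series. The record length
# (94300 ka) is a common multiple of all three periods, so each falls
# on an exact FFT bin. All numbers are illustrative, not real ridge data.
import numpy as np

dt = 1.0                          # sample spacing, ka
t = np.arange(0, 94300, dt)       # crustal age axis
rng = np.random.default_rng(0)
signal = (1.0 * np.sin(2 * np.pi * t / 100)
          + 0.5 * np.sin(2 * np.pi * t / 41)
          + 0.3 * np.sin(2 * np.pi * t / 23)
          + 0.2 * rng.standard_normal(t.size))  # random noise

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)           # cycles per ka

# three strongest peaks, skipping the zero-frequency (mean) bin
top = np.argsort(power[1:])[-3:] + 1
periods = sorted(round(1 / freqs[k]) for k in top)
print(periods)  # -> [23, 41, 100]
```

In the real studies the age axis comes from calibrating distance from the ridge axis against the magnetic stripes, and the significance of each spectral peak would normally be tested against a noise background rather than simply read off, as this sketch does.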


Periodicities of astronomical forcing and global climate over the last million years (credit: Wikipedia)

An interesting corollary of the observations may be that pulses in sea-floor eruption rates emit additional carbon dioxide, which eventually percolates through the ocean to add to the atmospheric concentration and so results in climatic warming. The maximum effect would correspond to glacial maxima, when sea level reached its lowest and the reduction in pressure stimulated the greatest magmatism. One of the puzzling features of glacial cycles over the last million years, when the 100 ka eccentricity signal dominates, is the marked asymmetry of the sea-level record: a slow decline to a glacial maximum and then a rapid rise due to warming and melting as the Earth changed to interglacial conditions. Atmospheric CO2 concentrations recorded by bubbles in polar ice cores show a close correlation with sea-level change indicated by oxygen isotope data from oceanic sediments. So it is possible that the build-up of polar ice caps, in a roundabout way, eventually reverses cooling once they reach their greatest thickness and extent, by modulating ocean-ridge volcanism and thereby the greenhouse effect.

January 2015 photo of the month

Angular unconformity on the coast of Portugal at Telheiro Beach (credit: Gabriela Bruno)


This image posted at Earth Science Picture of the Day would be hard to beat as the definitive angular unconformity. It shows Upper Carboniferous marine metagreywackes folded during the Variscan orogeny overlain by Triassic redbeds. Structurally it is uncannily similar to Hutton’s famous unconformity at Siccar Point on the coast of SE Scotland, although the tight folding there is Caledonian in age and the unconformable redbeds are Devonian.

Human-Neanderthal cohabitation of the Levant

The earliest known remains of anatomically modern humans outside of Africa were unearthed from the Skhul and Qafzeh caves in what is now northern Israel. Their context was that of deliberate burial at a time when climate was cooling from the last interglacial, between 90 and 120 ka. The Levant was also the repository for a number of well-preserved Neanderthal skeletons, most dating to between 35 and 65 ka, including ten individuals at Shanidar in today’s northern Iraq, some of whom were also deliberately buried, including one whose grave reputedly contained evidence for a floral tribute. The 25 ka gap between the two populations has previously been regarded as evidence for a lack of contact between them. However, the Tabun Cave in modern Israel has yielded tools attributed to the Neanderthal Mousterian culture that may indicate their intermittent presence from 200 to 45 ka, and fossils of two individuals dated at ~122 and ~90 ka. The remains at Skhul and Qafzeh are significantly more rugged or robust than those of their African contemporaries and have been considered possible candidates for Neanderthal-modern human hybrids. But whatever their parentage, it seems they became extinct as the climate of the Levant dried to desert conditions around 80 ka.


Entrance to the Shanidar Cave, northern Iraq, occupied by Neanderthals between 35 and 65 ka (credit: Wikipedia)

A more promising overlap between modern human and Neanderthal occupation comes with the discovery by a group of Israeli, US, Canadian, German and Austrian scientists of a much younger anatomically modern human cranium from the Manot Cave, also in northern Israel (Hershkovitz, I. and 23 others 2015. Levantine cranium from Manot Cave (Israel) foreshadows the first European modern humans. Nature (online) doi:10.1038/nature14134). The cranium has a U-Th radiometric age of ~55 ka, well within the time span of Neanderthal occupation. Moreover, Manot Cave is one of a cluster of occupied sites in northern Israel separated by only a few tens of kilometres: this individual and companions more than likely met Neanderthals. The big question, of course, is did the neighbours interbreed? If so, the Levant would be confirmed as the probable source of the hybridisation to which the DNA of non-African living humans points. There may be an insuperable difficulty in taking this further: it is thought that the high temperatures of the region, despite its dryness, may have destroyed any chance of reconstructing ancient genomes. Yet one of the first Neanderthal bones to yield useful genetic material was from Croatia, which is not a great deal cooler in summer.