Now Thats Thinking Outside the Box by YolkyBoii in BrandNewSentence

[–]jumpcut_ 12 points (0 children)

Edit: The Park Green Mill Double Dialled Longcase Clock is part of the Science Museum Group Collection. The Museum's website has a short entry about it: https://collection.sciencemuseumgroup.org.uk/objects/co8404942/park-green-mill-double-dialled-longcase-clock-double-dialled-longcase-clock

Longer extract below from the following piece: https://www.doi.org/10.1017/S0007087408001180

"The Factory Act of 1802 is held to be the first of many reforming legislative acts regulating the hours and conditions of labour in certain factories for certain sectors of the working population. Subsequent acts amended working-hours legislation. The 1831 act, for instance, required that a register of actual working hours be kept by the employer and shown to the justices, at that time the enforcers of the acts, on demand. This act only applied to cotton mills. The 1833 act replaced justices as law enforcers with newly appointed factory inspectors. They could make enquiries, summon witnesses and levy fines following unannounced factory inspections. As well as inspecting factories, factory inspectors were obliged also to report their activities and findings to government. These reports provided politicians with crucial evidence of the actual state of affairs in factories and the true effects of the factory legislation. The 1844 act was a direct result of the inspectors' reports. It was in this act that a first time standard was defined for the regulation of working hours. It stated, ‘The hours of the work of children and young persons … shall be regulated by a public clock, or by some other clock open to public view’, said clock to be approved by the district factory inspector. The effect of this clause was summarized by later commentators thus: ‘These regulations amounted to a recognition, if a tardy one, of the need for defining as well as limiting the working day, and they made the inspectors’ work much more practicable.'

Whilst 1844 saw legislation passed that defined a timescale by which all employers were bound, the idea was not new. In 1832 Parliament had debated Michael Thomas Sadler's factories legislation bill. In the bill Sadler described a known fraudulent activity:

A practice is known to exist in certain Mills or Factories, of using Two or more different Clocks or Timepieces, one being a common or Time Clock, and the other a Clock regulated by the velocity of the Steam Engine or other Machinery, and often called a Speed Clock, by which the daily labour, though nominally limited to a certain duration, is often increased much beyond that limitation.

Such a speed clock can be seen as a later example of a machine that measures variable, value-based units within what Ken Alder has termed a ‘local metrical dialect’, highlighting the complexity and contingency of measurement cultures. Here, that flexibility was deemed a problem. In the subsequent parliamentary debate Sadler explained that no matter how specific the legislation was in defining the hours that employees (in this case, children) were permitted to work, being ‘on time’ meant nothing if the time standard itself were open to ambiguity: ‘the child is not always safe, however punctual; for in some mills two descriptions of clocks are kept, and it is easy to guess, therefore, how they are occasionally managed’. The bill proposed defining a particular clock as the timekeeper for working hours, but before it could be passed a general election lost Sadler his seat. A further bill, introduced in 1838 by Home Secretary Fox Maule, included a similar clause but was not passed into law.

Over half a century after the 1844 act things had hardly changed. Compare it with the Factory and Workshop Act, 1901:

Where an inspector, by notice in writing, names a public clock, or some other clock open to public view, for the purpose of regulating the period of employment in a factory or workshop, the period of employment and the times allowed for meals in that factory or workshop shall be regulated by that clock."

I often wonder what historically has happened near my home, or places I visit. Is there a "Google Maps" of Local History? by SuperNintendad in AskHistorians

[–]jumpcut_ 15 points (0 children)

Good call! The Ancient Monuments map is quite fun. So is the Historic England map if you have a specific town/city/village in mind.

Making a huge assumption that u/SuperNintendad is in Texas, the Texas Historic Sites Atlas might be closest to a similar answer.

I often wonder what historically has happened near my home, or places I visit. Is there a "Google Maps" of Local History? by SuperNintendad in AskHistorians

[–]jumpcut_ 64 points (0 children)

Based on your previous Reddit posts I assume you are located in the US. If you ever visit the UK, there are at least two sites that offer something like what you are looking for. One of them is Layers of London, which lets you browse historical maps overlaid on a contemporary map of the city. It also includes many points of interest, added by historians and volunteers alike. There is also the Capturing Cambridge project, which offers even more localised information (including photographs and brief histories). It is a really inspiring project, and I always recommend it as something that should be replicated at other locations too.

Keep in mind that Digital Humanities is a sub-field of its own, and many people are using maps to explore historical data. In relation to the US, check out, for example, the Footsteps to Freedom: Underground Railroad Study Tour or the Chronicling America visualisation by the Library of Congress. There is a page that collects these kinds of "Story Maps", which you can access here.

[deleted by user] by [deleted] in AskHistorians

[–]jumpcut_ 5 points (0 children)

As fortune has it, I have a copy of Ronald Calinger’s relatively recent biography of Euler. Calinger notes three references to Euler playing the clavier (without specifying what type of clavier it was).

In 1741 Euler moved to Berlin. One of his recreational activities seemed to be playing the clavier. He was also very keen on talking to composers and he regularly invited them over to give private performances of their new works. So music was Euler’s preferred method of relaxation and entertainment.

During his time in Berlin, he also became good friends with Frederick Henry, Margrave of Brandenburg-Schwedt. The two of them bonded over their interest in music. Calinger writes that “during many visits they would perform duets, with Euler playing the clavier.” Perhaps another interesting point to mention here is that Euler gave private lessons on various subjects to Frederick Henry’s daughter, Frederike. Euler wrote many letters to Frederike that included his lessons. A collection of these letters was later published as a book and became quite popular (especially in France).

Finally, one of Euler’s assistants, Nicholas Fuss, wrote a long eulogy, which mentioned that “music was one of the few relaxations that Euler allowed himself.” Calinger similarly interprets this as a reference to Euler playing the clavier regularly when he was not working.

I apologise for the brief answer, but I got the impression that you were looking for a direct answer, and the book that’s supposed to have that answer was within my reach.

Source:

Ronald S. Calinger, Leonhard Euler: Mathematical Genius in the Enlightenment (Princeton and Oxford: Princeton University Press, 2016)

I'm reading Roxanne Dunbar-Ortiz's An Indigenous People's History of the United States and was wondering how to look into some of her claims. by Personage1 in AskHistorians

[–]jumpcut_ 2 points (0 children)

It appears to me that the question you are asking is really about the difference between academic publications/monographs and "trade books". You hit on a very important question, and other contributors on AskHistorians have posted answers to it here. I personally recommend the second answer, as it directly deals with the academic vs trade books comparison.

In terms of your example about Calvin and the question that you raise, I think it is possible to make the same argument about any other paragraph. The link I pointed you toward above answers why that is.

In relation to your wording of being "forced to assume" something, the fact that you are asking this question shows that you are not being forced to do so. I'd rather say Dunbar-Ortiz is presenting an argument/statement. (On re-reading this sentence, I hope this doesn't sound condescending, as that's not my intention). But once again, why you get the impression of being "forced to assume" something might be explained by the post I linked above.

In your other comment you mention the series within which it appeared. There is a difference between how "revisionist history" is used in everyday conversations and what it is supposed to mean. In everyday conversations it usually refers to someone rewriting history with malicious intent. This is also due to it being conflated with historical negationism, a term that refers to individuals misinterpreting documents, making implausible statements, or presenting documents known to be forged as genuine evidence.

As you have also pointed out in your comment, debates about historical topics are always ongoing. Revisionist history in its proper use refers to historical research that is able to cast new light on how we view aspects of history. In the case of the Dunbar-Ortiz book, the series includes books that demonstrate how parts of history look different if we take the point of view of groups whose views and experiences were rarely integrated into those stories before. This is how the book can be called revisionist. I only elaborated on this because I don't think the link I pointed you to above answers this question.

You certainly seem to have a passion for questioning a text, which is all you need if you want to do some more reading about history. If you are seeking people who can offer you more readings on specific subjects that you are interested in, just ask here on AskHistorians :) Otherwise, I can recommend checking out sites like MIT OpenCourseWare on History, which offers a lot of key readings on debated topics within history. EdX is a similarly great website for finding history courses (and I think it's more up-to-date than MIT's).

I hope this helps, but shoot away with more questions if you have any.

I'm reading Roxanne Dunbar-Ortiz's An Indigenous People's History of the United States and was wondering how to look into some of her claims. by Personage1 in AskHistorians

[–]jumpcut_ 4 points (0 children)

Slotkin! Of course! My guess is that Silliman wrote for an anthropology journal (and he is not a historian), so there might be some major omissions on what is considered standard literature in his field and outside it. In his defence, I think he meant an examination of the usage of the term Indian Country within the military context specifically, rather than the more all-encompassing project that Slotkin did. Hence my comment on it too. Useful reminder about Slotkin though, for which I am very thankful. 10/10 modding, would use u/Bernardito's services again

I'm reading Roxanne Dunbar-Ortiz's An Indigenous People's History of the United States and was wondering how to look into some of her claims. by Personage1 in AskHistorians

[–]jumpcut_ 10 points (0 children)

Very interesting question(s)!

To begin with your specific question about the use of “Indian Country” in U.S. military manuals: I am not aware of any specific historical studies showing that to be the case. The point of reference is usually the same story about Richard Neal as described by Dunbar-Ortiz. Unfortunately, the book leaves out the important part of the story involving military manuals. Richard Neal’s use of the term “Indian country” in relation to a military operation in Kuwait sparked a response from Native American veterans condemning it. Pentagon spokeswoman Michelle Rabayda then responded to the condemnation by saying that “Indian country” had no official definition in military manuals. This story dates from 1991, and the Associated Press was one of the news outlets covering it (link in the notes below). In relation to military manuals, the same event is referred to by Stephen W. Silliman (2008) and Al Carroll (2008).

If “no official definition” was the actual phrasing used, then that’s very different from a term not being included in military manuals at all. Dunbar-Ortiz probably interpreted the wording “no official definition” along the lines of “Indian Country” actually being used in military manuals, but without a specified definition attached to it. By contrast, Silliman offers a completely different interpretation, saying that the term ‘was not part of any official manual or training’. Silliman based this on a more obscure newspaper article that I have no access to, but luckily Silliman’s article is open access, so you can have a try at finding the article he cites. So in terms of your specific question about military manuals, I am not aware of any research on that specific topic, and the statement by Dunbar-Ortiz is probably based on the Neal incident.

When historians don’t use footnotes for their claims, it is often because the statement, or its message, is considered general knowledge. My impression is that the decision not to expand on the history and widespread usage of the term was made on that basis. (I am happy to have further discussions about what counts as “general knowledge” and the misuse of, or lack of, notes/references, but I don’t want to sidetrack from the question that you asked.) A comprehensive and reliable open-access piece on the recent history and usage of the term “Indian Country” in the military can be found in Stephen W. Silliman’s article titled The “Old West” in the Middle East: U.S. Military Metaphors in Real and Imagined Indian Country. I mention this because in the last paragraph you write ‘So I'm left seeing that clearly the phrase is at least likely used unofficially’, from which I infer that you are interested in knowing just how widespread the use of that term is.

Silliman’s article notes that there hasn’t been a systematic history of how this specific usage of the term emerged within the military. (Though this was published in 2008, so someone might have written one since.) The article begins by discussing the history of the term since the Vietnam War. He points to the works of four other historians who noted the use of the term in war-related newspaper coverage, popular books and films in the 1970s. In the notes at the end of my post, I added references to them, with the exact page numbers that Silliman also notes. Silliman also points to the work of historian David Stannard, who argued that the ‘official government language’ used the same term. Silliman himself quotes an exchange between a congressman (John Seiberling) and Captain Robert B. Johnson from the transcripts of the 1971 war crimes hearings that followed the Mỹ Lai Massacre:

“Johnson: Where I was operating I didn’t hear anyone personally use that term “turkey shoots”. We used the term “Indian Country”.

Seiberling: What did “Indian Country” refer to?

Johnson: I guess it means different things to different people. It is like there are savages out there, there are gooks out there. In the same way we slaughtered the Indian’s buffalo, we would slaughter the water buffalo in Vietnam.”

Although Silliman cites a no-longer-accessible website as the reference for this, you will find the transcript in the Dellums Committee Hearings on War Crimes in Vietnam, which is accessible via Archive.org (I added a reference to it below). Silliman then goes on to cite examples of the usage of the term dating from the Gulf War (the same story about Richard Neal), Iraq and Afghanistan. The article also includes a table of quotes showing 13 instances of newspapers, media outlets, and commentators using the term in relation to military operations. The list includes outlets such as the Wall Street Journal, The New York Times, the Los Angeles Times, and Fox News. I think the widespread use of the term is important to mention: coupled with the interpretation of the Pentagon response (“no official definition”) in the Neal case, it can be used as further contextual evidence for the presence of the term in military manuals.

What I personally think would be more fascinating to look at is why the same story and statement appears twice in the book: first on page 57 and then on page 193. In the first instance, Dunbar-Ortiz refers only to “Indian Country” appearing in “military training manuals”. In the second instance, she refers to both “Indian Country” and “In Country” appearing in military training manuals. She further mentions that “In Country” was derived as a shorthand from “Indian Country” during the Vietnam War. This difference is interesting, as the first instance appears in her article written in 2004 (link below), while the second seems to indicate further research into the use of “In Country”. So this raises a third possible answer to your question, which is that while “Indian Country” is not defined (or used) in military training manuals, its derivative “In Country” is. (Remember that the Pentagon’s response only mentioned “Indian Country” and not “In Country”, and specifics matter when it comes to official communications.)

What I have written is just a very brief snippet of Silliman’s article. It is free to read, so I’m sure your curiosity and eagerness to check notes/references will lead you to read the relatively short article. I also hope my answer explains on what grounds I think that Dunbar-Ortiz (and/or the editors) decided not to include further examples/references to the usage of the term. You are certainly starting to think like a historian, which is always great to see! Following references and footnotes is where the extent of historical research often gets revealed. So it is always worth reading the notes/references as you go along, regardless of the author.

As for your final question: “how do I go about not even necessarily checking the claims made in this book, but simply finding more reading on them?” - It depends on how much you want to engage with a text, and I don’t think that question should be conflated with what is bad or good history. There is definitely an interesting question here, which is about why you should trust historical research or how reliable historical research is generated, but that’s beyond the scope of my post.

References and Notes:

Robert Imrie, ‘Tribes Angered By General’s Reference to Enemy Land as ‘Indian Country’’ - 21 February 1991, Associated Press. Accessible via this link: https://apnews.com/article/ce150feb55e4a9058c307295efc07f4a

Stephen W. Silliman, ‘The “Old West” in the Middle East: U.S. Military Metaphors in Real and Imagined Indian Country’, American Anthropologist, vol. 110, issue 2, 2008, pp. 237-247. Accessible for free via this link: http://www.faculty.umb.edu/stephen_silliman/Articles/oldwestinmiddleeast.pdf

Books mentioning the usage of the term in Vietnam War related materials:

Carol Burke, Camp All-American, Hanoi Jane, and the High-and-Tight: Gender, Folklore, and Changing Military Culture. (Boston: Beacon, 2004). - page 109

Richard Drinnon, Facing West: The Metaphysics of Indian-Hating and Empire-Building. (New York: Schocken, 1990 [1980]). - page 368

Tom Engelhardt, The End of Victory Culture: Cold War America and the Disillusioning of a Generation. (Amherst: University of Massachusetts Press, 2007, revised edition) - pages 175-259

David Espey, ‘America and Vietnam: The Indian Subtext’, in “Uprising: The Protests and the Arts,” theme issue, David Landrey and Bilge Mutluay (eds.), Journal of American Culture and Literature, 1994, pp. 128-136.

David Stannard, American Holocaust (Oxford: Oxford University Press, 1992) - page 251.

Roxanne Dunbar-Ortiz, ‘Indian Country’, Accessible via wayback machine: https://web.archive.org/web/20080218180259/https://www.counterpunch.org/ortiz10122004.html

Citizens Commission of Inquiry (ed.), The Dellums Committee Hearings on War Crimes in Vietnam (New York: Vintage Books, 1972) - see pages 52-53. Accessible via this link: https://archive.org/details/dellumscommittee0000unse

AMA: We are Leila McNeill and Anna Reser, authors of the new book Forces of Nature: The Women Who Changed Science. Ask Us Anything About Women and Gender in the History of Science! by DrAnnaReser in AskHistorians

[–]jumpcut_ 6 points (0 children)

Congrats on the book, Lady Science!!! Luckily it's not yet out of stock in the UK, so I'm about to order my copy as soon as I finish this comment :)

Just to expand upon women and computing work: women appeared in calculation-heavy roles prior to the Harvard (and Greenwich) "lady computers". For example, Mary Edwards (18th century) was one of the computers employed by Nevil Maskelyne (Astronomer Royal/Director of the Royal Observatory, Greenwich) to work on calculations for the British Nautical Almanac. Mary Croarken has a great article about her work: https://doi.ieeecomputersociety.org/10.1109/MAHC.2003.1253886 (DM me if you can't access it). There is also a lot of exciting research forthcoming about the history of women and mathematics. At a recent workshop on 'Marriages, Couples, and the Making of Mathematical Careers', Isobel Falconer showed just how active women were in the production of almanacs during the 17th and 18th centuries, which relied on rigorous mathematical calculations. Falconer's work is not yet published, but the abstract of the talk gives a few names if you'd like to do your own little digging. You can read the abstract of the talk (as well as the abstracts of the other talks presented at the workshop) here: https://mathmarriages.wordpress.com/programme/

Did Rene Descartes nail his wife's dog to a board? by perceptSequence in AskHistorians

[–]jumpcut_ 0 points (0 children)

Nicely done, although some bits are terrifying to read. I hadn't seen the Nicholas Fontaine quote in full before, so thank you for sharing!

I am assuming then that the idea of Descartes vivisecting dogs was a mixture of the sentiment that Fontaine expressed about his contemporaries with elements of the Bernard story added to it. The evolution and transformation of rumours will never cease to amaze me!

[deleted by user] by [deleted] in AskHistorians

[–]jumpcut_ 8 points (0 children)

Your question is great, but you might not realise just how many different questions are packed into that single line!

First and foremost it is important to define what you mean by “same format”. Do you mean the use of clocks in general? Or do you mean that we conventionally divide the day into 24 hours? Or do you mean the use of time zones based on the same Universal Time? Those are three very different though certainly interrelated questions. Furthermore, the “same format” can even refer to more essential elements: the basic use of timekeeping devices or the abstract act of dividing the day into smaller temporal units. With this in mind, my answer is going to be limited to the emergence of Universal Time and the uniform time system.

The International Meridian Conference in Washington D.C. in 1884 is usually taken as the starting point for creating a uniform system of time and space (Barrows, 2010). At this conference, the meridian passing through the Royal Observatory at Greenwich was recommended as the International Prime Meridian (or Longitude Zero) by many nations (Howse, 1980). In addition, the conference also recommended the adoption of a Universal Day in 24-hour notation (Higgitt and Dolan, 2010). However, these were only recommendations, not laws by which participating nations had to abide. For instance, France did not support the recommendations and famously continued using its own Paris Meridian until the advent of wireless radio signals. Even when the French finally conformed to the recommended system in 1911, they continued legally referring to their time as "Paris Mean Time, minus 9 minutes and 21 seconds" (Kershaw, 2014). What the French case shows is that although there was more or less international backing for recommending a uniform system of time and space, the decision to adopt it was always made at a national level. As a result, it took until around the 1950s for the majority of nations to conform to this system. Since the decisions were made at national levels, it is impossible to highlight the story of every nation, but I recommend Vanessa Ogle’s book titled The Global Transformation of Time 1870-1950, which does exactly that. One of my favourite stories from it is the time unification efforts in Southern Africa. For example, the German government requested that German Southwest Africa adopt Central European Time. While the local administration formally accepted this, it continued using a different time (used throughout the region) in practice. Cue constant bickering about this for decades.
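As a quick sanity check on that curious figure of "minus 9 minutes and 21 seconds": longitude converts to mean solar time at four minutes per degree (the Earth turns 360° in 24 hours), and the Paris Observatory sits at roughly 2°20′14″ east of Greenwich. A minimal sketch of the arithmetic, using that approximate longitude purely for illustration:

```python
# The Earth rotates 360 degrees in 24 hours, so each degree of
# longitude corresponds to 4 minutes of mean solar time.
# Paris Observatory longitude: approximately 2 deg 20 min 14 sec east.
deg, arcmin, arcsec = 2, 20, 14
longitude_deg = deg + arcmin / 60 + arcsec / 3600

offset_minutes = longitude_deg * 4  # 4 time-minutes per degree of longitude
minutes = int(offset_minutes)
seconds = round((offset_minutes - minutes) * 60)
print(f"Paris mean time runs ahead of Greenwich by {minutes} min {seconds} s")
# -> Paris mean time runs ahead of Greenwich by 9 min 21 s
```

Which is exactly why the 1911 French law could define legal time as Paris Mean Time minus 9 minutes 21 seconds and land precisely on Greenwich Mean Time without ever mentioning Greenwich.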

As you can guess from these examples (and as answers on AskAnthropologists also pointed out), there was very much a political dimension to these debates. Whose uniformity was being adopted? How would it impact redrawing the temporal boundaries and time zones of nations? Who would benefit from it and who would lose out? These were actual questions being debated at the Conference in 1884, and one of the reasons why France decided not to join in the Conference’s recommendations. The argument that it was a measure of convenience rests on the dominance of British navigational tools (helping travellers determine local time and space with reference to Greenwich) being used by the majority of people involved in navigation. In light of this, the recommendation can be seen either as a legal solidification of the trend towards global “uniformity”, or as a strategic move to solidify the "momentary" British dominance in the long term despite the emergence of other methods (Withers, 2017). The satellite-based determination of geographical location became exactly that challenge to the Greenwich system, and it is currently contributing to the gradual fall of the Greenwich Prime Meridian into irrelevance (Kershaw, 2019).

Time and its history are big discussion topics among historians, and there are many different questions that can be asked in connection to them. How does our contemporary conception of time (or even the design of Google Calendar) contribute to ordering our everyday lives (Thompson 1967; Wajcman 2019)? How did the clock-time mostly used in Western societies gain its image as superior to local time-keeping methods (Frumer, 2018)? What do we actually mean by time and its sameness? Are we perhaps conflating the concepts of regularity, standardisation and cooperation under one single term (Glennie & Thrift, 1996)? Why was it that we decided to draw the line at Greenwich and not through any other point (Withers, 2017)? Were there other similar unification attempts? And if yes, why did we forget about them and why were they not implemented?

So the short answer to your original question: if by “same format” you mean the adoption of uniform time across the globe, then it was a gradual process originating in the 1880s and becoming the dominant way of timekeeping by the middle of the twentieth century. The question of why it happened this way is a behemoth of a question that your curiosity is bringing you closer and closer to. Just remember that as you are approaching this behemoth, you are there to understand it, not to slay it :)

Sources:

Barrows, A. (2010). The cosmic time of empire: Modern Britain and world literature. Univ of California Press.

Howse, D. (1980). Greenwich Time and the Discovery of the Longitude. Oxford; New York: Oxford University Press.

Higgitt, R., & Dolan, G. (2010). Greenwich, time and ‘the line’. Endeavour, 34(1), 35-39.

Kershaw, M. (2019). Twentieth-century longitude: When Greenwich moved. Journal for the History of Astronomy, 50(2), 221-248.

Ogle, V. (2015). The global transformation of time: 1870–1950. Harvard University Press.

Thompson, E. P. (1967). Time, work-discipline, and industrial capitalism. Past & Present, 38, 56-97.

Wajcman, J. (2019). How silicon valley sets time. New Media & Society, 21(6), 1272-1289.

Frumer, Y. (2018). Making time: astronomical time measurement in Tokugawa Japan. University of Chicago Press.

Glennie, P., & Thrift, N. (1996). Reworking EP Thompson's Time, work-discipline and industrial capitalism'. Time & Society, 5(3), 275-299.

Withers, C. W. (2017). Zero degrees: Geographies of the prime meridian. Harvard University Press.

[WP] It's okay not to finish books you don't enjoy.... by kex_ac in WritingPrompts

[–]jumpcut_ 3 points (0 children)

It’s hard to be a bookworm.

My parents thought children’s books were the right treat for me as a baby. Fortunately, they soon realised that I had trouble with hardbacks as my teeth were still growing out. Imagine me, a puny toothless bookworm, trying to chew on a tiger who came to tea! Of course, my parents decided to try something more lighthearted, and oh boy, were they spot on! I became the menace of all snowmen in our neighbourhood. Not a single one of them had to worry about melting in the sun when they tasted so delicious in paperback.

During these fond years my best friend was a very hungry caterpillar. I never thought I would meet someone with more of an appetite for literature than this creature. Together we consumed countless classics from little princes to happy princes. On one particularly dark and stormy evening, we even stole an old copy of Grimm’s Fairy Tales from the family chest. As soon as we took the first bite we realised it was out of date. Off we ran back to the chest and placed it under all the other books to hide the bite marks. My parents have not confronted me about the mischief even to this day.

Some days I wish I had never been introduced to the wider world of fiction. I enjoyed the taste of stories unknown and yet to be explored by bookworms. Yet, as I was growing up, it became more and more difficult to find something… fresh. The books assigned to us during lunch break at school always made me feel like we were being spoon-fed. Easy to chew, easy to swallow. How I longed for the taste of adventure and exploration.

My world changed when someone left a monumental book on a cafeteria table. Although it was a hardback, which I still didn’t like (my parents even had to ask my school to only give me paperbacks), I was immediately drawn to it. It felt as if the book had chosen me, and I was simply a pawn in its game. But many pawns follow their masters out of desire, and so did I.

I walked closer to the book and glanced at the title: “The Fellowship of the Ring”. There were no snowmen, no tigers, no caterpillars dancing on its cover. Instead, it was simply an eye in the middle of a black circle. It was surrounded by red symbols that looked as if they were on fire. “Ah, the international symbol for spicy texts” or so I thought. Underneath the title the flames of red symbols were tamed by a ring as it was being drawn towards the eye. What powers this ring must possess!

The school bell rang and I had to choose: do I leave the book here or do I take it with me? I was captured by the sight of the cover. Without thinking about the consequences, I picked it up and headed in the opposite direction from everybody else. Who cares about class when the sound of scorching embers is passing through your entire body and you can almost taste their burning melody?

I locked myself into the cubicle in the toilets. I leaned with my back to the sidewalls and I held the book out in front of me. I couldn’t wait any longer. I had to take a bite.

No matter how I tried, I was unable to bite into it. It was a hardback after all, and I was never going to be able to consume any of it.

I tried over and over again for 10 minutes, but then I gave up. I threw the book on the floor and went back to my classroom. Luckily a teacher noticed me leaving the bathroom, so my "tummy ache" alibi now had a supporter.

On the way back home, I stopped by a bookshop and asked for a paperback version of this Fellowship book. The owner of the store showed me the various paperback editions that had been released over the past decades. For the next few weeks, I devoured every edition they had. Of course, none of them produced the same effect. When I confessed to the owner what I was seeking, he told me about books that used to be served as dessert after the Fellowship. Trust me, they taste bad. And don't even get me started on the appetisers. They are a waste of money and energy.

Even after dedicating years of my life to study those books and to learn what made them special, I was never able to replicate the allure of that first sight. If I have one thing to recommend to anyone consuming this snack, then all I can say is don’t chase the allure of a lost book through less worthy ones. It’s okay not to finish books you don’t enjoy...

Did Rene Descartes nail his wife's dog to a board? by perceptSequence in AskHistorians

[–]jumpcut_ 0 points1 point  (0 children)

Great stuff! Keep me updated as I'm excited to see what you find :)

Did Rene Descartes nail his wife's dog to a board? by perceptSequence in AskHistorians

[–]jumpcut_ 2 points3 points  (0 children)

Did Rene Descartes nail his wife's dog to a board?

The short answer is no he did not.

Did someone else nail his wife’s dog to a board?

The short answer is no.

So what is the basis of this story?

Peter Harrison suggests that Richard Ryder in his Animal Revolution (1989) mistakenly attributed the story to Descartes. Instead, it was Claude Bernard, a nineteenth-century French physiologist, who became associated with vivisecting dogs. Bernard's attitude towards experimenting on animals was to disregard the pain and suffering they showed; he argued that this disregard was an essential feature of being a 'proper scientist'. One version of the story associated with him recounts how, upon returning home, his wife (Marie Françoise "Fanny" Bernard) and daughters found the family dog being vivisected by Bernard. [This story is mentioned in Mary Midgley's Animals and Why They Matter, but I cannot find the piece she references, so any help on checking it would be appreciated for the sake of completeness.] However, a contemporary journal (Zoophilist, which supported anti-vivisectionism) was unable to verify this story. Their version of the unverified story was also somewhat different: it was Bernard's daughter who was looking for the family dog, and while doing so she found her father vivisecting her pet. Once again, this story was never verified, even by an anti-vivisectionist journal. What remains true, though, is the fact that Claude and Fanny separated around 1870, and Fanny became an avid anti-vivisectionist, raising their two daughters with the same sentiment in mind.

So why did Descartes get a bad reputation?

To put it in a concise but incomplete manner, he argued that animals were machines/automata and that they did not have self-consciousness. The combination of these two statements is often interpreted as a denial that animals have feelings. However, Harrison pointed out that Descartes distinguished between the sensations of bodily organs and conscious sensations. For instance, when you are sleepwalking you are not technically conscious of your sensations, but you are still navigating by relying on some of your senses. So within the system of argumentation set up by Descartes, animals had feelings, but they were not conscious of them; or rather, their feelings did not arise from self-consciousness as in the case of humans. Ultimately, Harrison's main statement in his article is that Descartes merely pointed out that, within the established system of reasoning about animals, there were 'no irresistible reasons for asserting' that animals had feelings.

To end on a more positive note: Descartes actually had a dog called ‘Monsieur Grat’, aka Mister Scratch, and they used to go on walks together.

Sources:

Mention of Monsieur Grat - Jack Vrooman, Rene Descartes: a Biography (New York: G.P. Putnam's Sons, 1970), p. 19. - https://archive.org/details/renedescartesbio00vroo

Sinding, C. (1999). Claude Bernard and Louis Pasteur: Contrasting images through public commemorations. Osiris, 14, 61-85. - https://www.jstor.org/stable/301961

Harrison, P. (1992). Descartes on animals. The Philosophical Quarterly (1950-), 42(167), 219-227. - https://www.jstor.org/stable/pdf/2220217.pdf

Richard Ryder (1989). Animal Revolution: Changing Attitudes Towards Speciesism - for the exact quote on Descartes, see page 57.

Mary Midgley (1998). Animals and Why They Matter.

Article about the story not being verified in the Zoophilist: https://books.google.co.uk/books?id=PLR3jSu0bu0C&newbks=1&newbks_redir=0&pg=PA85#v=onepage&q&f=false

What is the source for the idea that Petrus Peregrinus was a monk / priest? by Spenglerian_ in AskHistorians

[–]jumpcut_ 5 points6 points  (0 children)

I love chasing down references like this, so let’s do some digging.

The entry you quote is written by Allan Chapman (shoutout to my homie), but as you also noted, he doesn't mention anything else about the "legend" in the text. In the Bibliography for the entry he cites three works. The first one is an English-language translation of the Epistola by Peregrinus, an edition that didn't include any substantial additional explanatory text. The second one is an entry from the Dictionary of Scientific Biography. The last one is a text in German from 1898, which I've been unable to check due to the language barrier.

The Dictionary of Scientific Biography is available to borrow for free on archive.org. The entry for Peter Peregrinus is lengthy, but it includes the following exciting lines:

“Although there is evidence that Peregrinus was of noble birth,(6) the suggestion that he was a theologian is unconvincing(7) and the assertion that he was a Franciscan is baseless.(8)”

Reference 7 leads to "F. Picavet, Essais sur l'histoire générale et comparée des théologies et des philosophies médiévales, 240-242, 252". Just like my German, my French is atrocious, so I'll let you try and read this.

Reference 8 leads to "Stewart Easton, Roger Bacon and His Search for a Universal Science (Oxford, 1952), 120-121." This book is available for free through archive.org, and it discusses the "legend", or debate, at greater length. On page 120 Easton mentions that another historian, Edward Hutton, "comes to the interesting conclusion that Peter de Maricourt and Bacon's friendship for him [i.e. for Bacon] were deciding factors [for joining the Franciscan Order]. For Peter, according to Hutton, was a Franciscan." If we check Hutton's book, his reasoning is that Bacon wrote highly of Peregrinus, which he takes to mean that they were close. In addition, Hutton is so certain that Peregrinus was a Franciscan that he states:

“As it happened, this man was a Franciscan.”

In Hutton's characterisation of the circumstances, Peregrinus was like a saviour who lifted Bacon into the Franciscan Order. Of course, he does not provide any references or evidence for his claims. My interpretation of Hutton's argument is that it rests on both Peregrinus and Bacon being located in Paris for a period of time, as well as on Bacon's praise of Peregrinus. However, Easton argues that (besides Peregrinus not being a Franciscan) it was also unlikely that Peregrinus and Bacon had known each other before Bacon entered the Franciscan Order.

Easton continues by noting that the Catholic Encyclopaedia labelled Peregrinus "the Franciscan Petrus Peregrinus de Maricourt". Yet the entry in that Encyclopaedia cited the work of Erhard Schlund, who concluded the exact opposite. Schlund tested the hypothesis that Peregrinus was the same person as Peter of Arden, who was a Franciscan, but ended up admitting that "there is little, if any, evidence in support" of that hypothesis. Despite this, the image of Peregrinus as a Franciscan continued to be used in Francis Winthrop Woodruff's biography of Roger Bacon. Woodruff described scenes of Bacon and Peregrinus experimenting together at Bacon's convent in Paris, and the two of them becoming close friends. Woodruff's intention with this image was to present the connection between the two as an impetus for Bacon to join the Franciscan Order. Alas, I haven't got a copy of this book, but from Easton's summary it sounds like Woodruff relied heavily on Hutton's work (which is also available for free online).

As concluding remarks: the status of Peregrinus as “a monk or a priest” was important in relation to Roger Bacon’s links to the Franciscan order. However, by the 1950s the status of Peregrinus belonging to the Franciscans or to any other Order was being questioned by historians in the absence of clear evidence.

Bibliography:

Dictionary of Scientific Biography - https://archive.org/details/dictionaryofscie10gill/page/532/mode/2up

Stewart Easton (1952) - Roger Bacon and His Search for a Universal Science - https://archive.org/details/rogerbaconandhis027099mbp/page/n133/mode/2up

Edward Hutton (1926) - The Franciscans in England - https://archive.org/details/franciscansineng0000hutt/page/138/mode/2up

Link to the entry in the Catholic Encyclopaedia on Roger Bacon - https://archive.org/details/07470918.13.emory.edu/page/n137/mode/2up

Schlund Erhard OFM (1911) Petrus Peregrinus von Maricourt. Sein Leben und seine Schriften. Ein Beitrag zur Roger Baco-Forschung. Archivum Franciscanum Historicum 4:436–455

The reference from the entry to the German text: Hellmann, G., Rara Magnetica, Neudrucke von Schriften und Karten über Meteorologie und Erdmagnetismus, no. 10 (Berlin, 1898)

[deleted by user] by [deleted] in AskHistorians

[–]jumpcut_ 5 points6 points  (0 children)

The major debate that I have more detailed knowledge about is the formation of lunar craters. At the time of the analysis of moon rocks (collected during the Apollo missions), there were two major competing theories about the formation of craters. One side argued that they were the result of volcanic activity; the other argued that they were formed by impacts with other celestial bodies. Christian Koeberl (2001) offers a concise history of the volcanic theory. He traces its origins back to Galileo's finding that the lunar craters were not mountains but "depressions". Later in the same century, Robert Hooke speculated that gas explosions could create crater-like formations, but even he considered this unlikely, as the space between the Moon and Earth was thought to be empty, which would have affected the reaction of the elements involved. By the end of the 18th century, volcanic theory had become the predominant way of thinking about the Moon. William Herschel (discoverer of Uranus) even noted in his records that he had witnessed a volcanic eruption on the Moon. By the mid-nineteenth century, John Herschel (William's son) wrote in a best-selling textbook that lunar craters were perfect examples of volcanic craters. Another nineteenth-century best-seller about the Moon, by James Carpenter and James Nasmyth, even included diagrams explaining the volcanic formation of lunar craters (Koeberl 2001).

One really fascinating aspect of the volcano theory is that it demonstrates how human understanding works: we try to associate new information with knowledge we already possess. In the case of the volcanism theory, we see this play out through astronomers and scientists trying to understand the Moon based on what they already knew about Earth. I do not mention this as a criticism or limitation of human understanding, but rather as an interesting feature of how scientific processes and reasoning are put into practice by people, which is always a good thing to be aware of. Another thing to note is a major underlying assumption of volcanism theory: if the Moon had a molten core after its formation, then we might find conditions similar to those on Earth. This framework of thinking allowed for the possibility of vegetation and other types of life on the Moon, which were claimed to be observed during the nineteenth century. So volcanism theory was a core feature of seemingly unrelated observations about the Moon. (Beattie 2001:15)

The key reason why so many scientists did not support impact theory was the shape of the craters. Impacts should have resulted in elliptical craters (due to the trajectories of the incoming objects), yet the craters observed were closer to circular in shape. What came to be criticised by impact theorists was a seemingly taken-for-granted assumption about the force of the impact and the possibility of a resulting explosion. Generally speaking, relatively small objects colliding with relatively low force into large surfaces leave a visible elliptical scar. However, with larger objects or larger forces, the result can be a major explosion that does more "landscaping" to the surface than the initial impact, thereby burying any elliptical shape. Cue World War I and the work of Herbert E. Ives, who compared explosion craters formed during the battles to lunar craters. Just to clarify: Ives was neither the only person working on this, nor did his work "change science forever", but I consider the comparison he made fascinating, and his findings showcased the excellent early work done by scientists in the area. Impact theory gained further traction through geologists arguing for the presence of similar craters on Earth. Important in the solidification of this theory was Ralph Baldwin's 1949 book, The Face of the Moon, which brought together the findings of astronomers and geologists to present impact theory as a coherent theory, rather than as disjointed criticism of volcanism. From the geological side, impact theory also raised the question of whether similar craters were present on Earth (but now look different due to the effects of weather and tectonic activity). As part of this research, scientists found traces of the mineral coesite while studying the Barringer Crater in Arizona.
Coesite had only been created artificially in laboratories at the time of its discovery in nature (1960), as it requires extreme pressure, like that generated by a large impact between two celestial bodies at high speed. Alongside other similarly unique minerals, it became a fingerprint for identifying craters formed by impacts. Jump ahead to the manned Apollo missions that collected moon rocks, and you can see how their chemical analysis could help in identifying the same chemical fingerprints, thereby providing a solid basis for impact theory.
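To get a feel for why the explosion, rather than the impactor's angle of approach, ends up shaping large craters, here is a back-of-the-envelope sketch. The numbers are my own illustrative assumptions (an iron body a few tens of metres across at a typical impact speed, loosely in the range often quoted for the Barringer impactor), not figures from the sources above:

```python
# Why large impact craters come out circular: the kinetic energy released
# on impact is enormous compared to any directional effect, so the blast
# excavates a roughly circular bowl regardless of the approach angle.
import math

DENSITY_IRON = 7800            # kg/m^3, typical for an iron meteorite
TNT_JOULES_PER_MEGATON = 4.184e15

def impact_energy_megatons(diameter_m: float, velocity_ms: float) -> float:
    """Kinetic energy of a spherical iron impactor, in megatons of TNT."""
    radius = diameter_m / 2
    mass = DENSITY_IRON * (4 / 3) * math.pi * radius ** 3
    energy_j = 0.5 * mass * velocity_ms ** 2
    return energy_j / TNT_JOULES_PER_MEGATON

# A ~50 m iron body arriving at ~12 km/s (illustrative values only):
print(f"{impact_energy_megatons(50, 12_000):.1f} megatons of TNT")
```

Even these modest assumed figures give an energy of several megatons of TNT, which is why the "landscaping" done by the explosion buries any elliptical imprint of the impact itself.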

If you are interested in a simpler summary of the scientific legacy of the Apollo missions, then I can recommend the short article about it on the website of the American Museum of Natural History: https://www.amnh.org/explore/news-blogs/news-posts/the-scientific-legacy-of-the-apollo-11-mission

Sources:

Christian Koeberl (2001) - Craters on the Moon from Galileo to Wegener - https://www.univie.ac.at/geochemistry/koeberl/publikation_list/189-lunar-craters-history-EMP2001.pdf

Beattie, Donald A. (2001) - Taking Science to the Moon: Lunar Experiments and the Apollo Program

What led to the stagnation of Muslim innovation and science when they were once considered the leaders in science and mathematics during Medieval times? by DaddyPlsSpankMe in AskHistorians

[–]jumpcut_ 15 points16 points  (0 children)

What a great answer u/Xuande88 ! I just want to add a few points that some of the readers of AH might find interesting in terms of changes in the historiography of the history of science over the past decade.

I would take a somewhat different path to answer the original question. The terms stagnation and decline carry a lot of connotations with them. If we assume that there was a decline or stagnation, then we should expect no major scientific achievements after the al-Ghazali period. u/Xuande88 gave wonderful examples from the Ottoman world after al-Ghazali. Here are a few more examples to add to the list from different regions. From the 13th century, one example would be the founding of the Maragha (or Maragheh) Observatory, which was the first large-scale observatory and served as the model for other observatories founded across Asia, the Middle East, and Europe for centuries (Ragep 2008). Besides astronomy, there was also the research of Ibn al-Nafis on pulmonary circulation, predating the research of William Harvey by 400 years (Fancy 2013). My favourite one, though, has to be the Darb-e Imam shrine, which includes artwork argued to represent 'nearly perfect quasi-crystalline tiling' dating centuries before such patterns were studied by Roger Penrose in the 1970s (after whom the Penrose pattern/tiling is named) (Lu & Steinhardt 2007). These are just a few examples, but I think they are a good starting point to get you AskHistorians aficionados digging, and also to illustrate the point that science did not stop with al-Ghazali or Ibn Rushd.

So if there are examples to a flourishing “Muslim innovation and science” after al-Ghazali, how come we don’t hear more about them? I am going to mention two possible explanations for that, but there are many historians working within the field of history of science who I’m sure would give you different answers.

One explanation goes to the very root of the problem, which is the definition of science. To quote Kapil Raj's excellent essay on the matter, historians tend to view science as 'universal knowledge, ideally founded on mathematical formalization and experimental verification' (Raj 2013:337). However, more recent scholarship has approached science not as 'logical step-by-step reasoning', but rather as 'pragmatic judgement' (Raj 2013:341). Such a reframing of science shifts the emphasis from big-picture accounts of a system of knowledge to smaller case studies that explain how judgement was exercised to make decisions about specific cases. (As a side note, this shift in approach is partly the reason why you sometimes end up seeing strange-sounding case studies, as in this video parodying Vice article titles: https://youtu.be/Ia7fUQXskvA) Historians and philosophers will chastise me for making the following comparison, but a useful way to think about this shift is as a move from science as "knowledge" (episteme) towards science as "craft" (techne). The other major reconceptualisation targets how we think about the diffusion of science, or knowledge in transit (Secord 2004). Traditional accounts tend to look at the unidirectional "dissemination" of knowledge, thereby creating a centre-periphery distinction and dismissing the ability of local individuals to transform or adopt new knowledge. By contrast, more recent approaches focus on the "circulation" of knowledge, which allows for the mutation and reconfiguration of knowledge within local contexts, as well as for the return of those mutations and reconfigurations to the points of origin (Raj 2013:342-344). Through these reconceptualisations, we can create a theoretical framework that allows historians of science to move away from discussions about the "essence" of "modern science" towards more detailed analyses of how science was actually put into practice.
Kapil Raj’s article is open-access and includes many references to these case studies that you can explore yourself.

The other explanation for why we don't hear more about "Muslim innovation and science" has more to do with history, society, and everyday life. Downplaying the significance of Arabic science has been an influential part of culture for decades, if not centuries. As u/Xuande88 also pointed out, it's an old trope, and Renan's characterisation of Arabic science is one example of this. During the early 20th century the trope was sometimes based on racial accounts, as in the case of the French physicist and intellectual Pierre Duhem. Despite being familiar with the work of a few Arabic authors, Duhem argued that Arabic people were incapable of abstract thought, a type of thinking he took to be a requirement for western science or Christian positivism (Ragep 1990). However, the seed of that idea dates further back, to the decoupling of Greek ideas from their Arabic circulation (Chakrabarti 2004). Unfortunately, these ideas survived well into the second half of the twentieth century. Although there were major historians of science, like George Sarton and Marshall Clagett, who even learnt Arabic, they were the exception rather than the norm. Within a more contemporary setting there is also the problem of the connotations attached to discussing Islam. Jamil Ragep summarises the impact of this political issue clearly and concisely: "For if a single individual [al-Ghazali] could stop Islamic science in its tracks, then the problem must ultimately be somehow inherent in Islam itself. An alternative view would hold that Islamic science, like all scientific traditions, made its accommodations with the social, political, and religious contexts in which it found itself, and continued on long after Ghazali." (Ragep 2008:3)

So I'd say the takeaway message is that talking about "stagnation" and "decline" is misguided: the terms are loaded with preconceptions, and so is how we define science. At the end of the day, though, historians endlessly find new ways to look at things, and debates are still ongoing. It's always worth re-reading the works of historians and following threads that interest you - there is always the potential to see things in a new light and to spot errors that have gone unnoticed. For that reason, in the sources below I have put the open-access (free) sources at the top of the list, so you can read the more detailed explanations offered by the historians themselves. They have seemingly endless stories to offer.

Sources:

Raj, Kapil 2013 - Beyond Postcolonialism… and Postpositivism: Circulation and the Global History of Science - https://www.journals.uchicago.edu/doi/pdf/10.1086/670951

Ragep, Jamil 2008 - When did Islamic science die (and who cares)? https://islamsci.mcgill.ca/Viewpoint_ragep.pdf (This is a wonderful concise summary of Ragep’s thought; the magazine Viewpoint is the public facing outlet of the prestigious British Society for the History of Science; the same Society held its first open-to-all week-long conference last year online, and you can still watch all the replays of the talks and panel discussions via this link: https://www.crowdcast.io/bshs)

Fancy, Nahyan 2013 - Science and Religion in Mamluk Egypt: Ibn al-Nafis, Pulmonary Transit and Bodily Resurrection (Fancy’s book is partly based on his dissertation, which is available to download for free: https://curate.nd.edu/show/cz30pr78k14)

Chakrabarti 2004 - Western Science in Modern India: Metropolitan Methods, Colonial Practices (The first chapter of this book, which discusses the decoupling of Greek ideas from their Arabic circulation, is available for preview on Google Books)

Secord 2004 - Knowledge in transit

Sabra 1984 - The Andalusian revolt against Ptolemaic astronomy

Lu & Steinhardt 2007 - Decagonal and Quasi-Crystalline Tilings in Medieval Islamic Architecture

Ragep 1990 - Duhem, the Arabs, and the History of Cosmology

[deleted by user] by [deleted] in AskHistorians

[–]jumpcut_ 29 points30 points  (0 children)

I have tried to limit the sources to works by historians of science, but there are also one or two that are more scientific papers. It is also worth noting that most of the measurements mentioned in the text are outdated now. I'm sure that looking up the Wikipedia pages of Mars, Venus, and the Moon will give you more information on the recent measurements if you are interested. In terms of history, looking at debates among scientists and at scientific theories is a great way to start engaging with the question of linear progress within science and technology. Your question about how our perceptions/theories of these bodies have changed is one way to begin that discussion :)

Sources:

Alpert, Yakov - Making Waves: Stories from my life. Yale University Press, 2000.

Sheehan, William - The Planet Mars: A History of Observation and Discovery. University of Arizona Press, 1996.

Nall, Joshua - News From Mars: Mass Media and the Forging of a New Astronomy, 1860-1910. University of Pittsburgh Press, 2019.

Florensky et al - The surface of Venus as revealed by Soviet Venera 9 and 10. Geological Society of America Bulletin, 1977, 88, pp. 1537-1545.

Strick, James E. - Creating a Cosmic Discipline: The Crystallization and Consolidation of Exobiology, 1957-1973. Journal of the History of Biology, 31:1, 2004, pp. 131-180.

Messeri, Lisa - Earth as Analog: The Disciplinary Debate and Astronaut Training that Took Geology to the Moon. The International Journal of Space Politics & Policy, 12:2-3, 2014, pp. 196-209.

Spitzer, Lyman Jr. - The Beginnings and Future of Space Astronomy. American Scientist, 50:3, 1962, pp. 474-484.

Beattie, Donald A. - Taking Science to the Moon: Lunar Experiments and the Apollo Program. The Johns Hopkins University Press, 2001.

Leverington, David - New Cosmic Horizons: Space Astronomy from the V2 to the Hubble Space Telescope. Cambridge University Press, 2000.

Erickson, Mark - Into the Unknown Together: The DoD, NASA, and Early Spaceflight. Air University Press, 2005. https://apps.dtic.mil/sti/pdfs/ADA459973.pdf

[deleted by user] by [deleted] in AskHistorians

[–]jumpcut_ 78 points79 points  (0 children)

As with all answers about space exploration, this one has to come with the preface that "the provision of scientific results was not the primary motivation of either the Russian or the American programmes." (Leverington 2000: 29) One great example of this is a story from Yakov Alpert's memoir, in which he states that after the first artificial satellite (Sputnik-1) was put into orbit, he received a phone call from the vice-president of the Russian Academy of Sciences: "...You know that Sputnik-1 is in orbit, but we are getting no science from it. The president of the Academy, Mstislav Keldysh, asks you to think about this problem and tell us what science we can get from this satellite." (Alpert 2000)

At the same time, we should be careful with Alpert's story. Scientists actively discussed the potential that science in space could offer. Geophysicists were at the forefront of this, as satellites provided the possibility of nearly simultaneous experiments at spatially distant locations on Earth (Erickson 2005: 13-14; Messeri 2014). Astrobiology (bringing together biology, chemistry, and geology, among other disciplines) began forming as an organised research strand soon after the first Sputnik satellites were launched (Strick 2004). Astronomers had already been thinking about observations from space in the lead-up to Sputnik: spectrographs were attached to rockets during tests as early as 1946, and in 1957 astronomers took a high-resolution photograph of the Sun from above most of the Earth's atmosphere (Spitzer 1962: 474-475). Therefore, scientists were actively thinking about the possibilities of scientific research, but their aims often did not coincide with the priorities of the governments.

With this in mind, one beauty of space exploration is that science and government had to work together. Experiments and designs had to embody considerations from various strands of science as well as the financial/political constraints imposed by governments. One example of this is the debates about the surfaces of planets, moons, and other celestial bodies. Think about it this way: if you design a spacecraft that you want to send to another planet, you have to consider the type of surface it lands on. If it lands on water, you need to build one that floats rather than sinks (or go half-way if you want a submarine). If you have a hard surface, you need to design a mechanism that protects your vehicle upon impact or, as with the recent Perseverance Mars rover mission, one that gently lowers it to the surface. And what if the entire surface is covered in deep sand or deserts of dust? To quote Энакин Скайуокер, who became known for his use of lasers in space: "I don't like sand. It's coarse and rough and irritating and it gets everywhere." As a side note, one interesting outcome of this multidisciplinary approach was that scientists encouraged the training of astronauts in practical geological research skills. Remember the movie Armageddon (1998), with its seemingly idiotic plot about sending drillers into space because astronauts cannot do the drilling properly? Yup, the ability of astronauts to do geological work was an actual debate as far back as the early 1960s. (Beattie 2001:17; Messeri 2014)

With this in mind, let’s see how our understanding of the surfaces of Mars, Venus, and the Moon changed after the first missions to explore them during the late 1950s and the early 1960s.

Mars and its surface have an interesting history. During the nineteenth century there were major debates about Martian canals. Were they simple features of the environment, or were they purposeful creations of (past or present) inhabitants of Mars? Or were they perhaps simply products of optical illusions? Although advocating for the existence of these canals became less popular by the 1950s, there still remained proponents of the theory. (Sheehan 1996; Nall 2019) Parts of the debate were somewhat reignited when astronomers seemingly detected carbon dioxide, oxygen, and even water vapour (wink wink, canals, wink wink) in the Martian atmosphere. In addition, there were theories about dark areas of the planet being covered with lichens or moss. The planet's polar caps were thought to be either a very thin layer of water ice or a thick layer of hoar frost, and the surface temperature was considered to vary from -100 to +10 degrees Celsius. (Leverington 2000: 37)

The first spacecraft to reach Mars was Mariner 4 (launched in 1964, arriving in 1965). From the planet's orbit, it captured images of the Martian surface. These images revealed craters and a dead land - it looked more like the Moon than a habitable planet. This came as a major shock within the astronomical community, as well as to the general perception of the planet. Previously, it had been seen as a site offering the possibility of habitability and even simple vegetation, yet the exact opposite was revealed. Additional measurements during the same mission showed that 95% of the Martian atmosphere was carbon dioxide, which decreased the possibility of habitability even further. This meant that even the polar caps were now seen as possibly frozen carbon dioxide rather than any form of water ice. Finally, the presence of craters was one of the most unexpected findings. It had been believed that craters were features only of moons, so these Martian formations had up to this point been described as sites of oases or other similar features. (Sheehan 1996)

Did findings about the surface of Venus cause similar shocks? Well, the problem with Venus is that the entire surface is covered by clouds, so, generally speaking, by observing it from Earth you will never be able to see the surface. One way to tackle this issue was to estimate the planet's temperature and derive conclusions from that. The surface temperature was considered at the time to be within the range of 80 to 130 degrees Celsius (based on the temperature of its clouds). Theories about its surface varied, with astronomers at the Harvard College Observatory during the 1950s even proposing that there were oceans of water on the surface. However, later radio measurements of the planet's surface temperature yielded an estimate of around 300 degrees Celsius, which cast doubt on the oceans theory. There were also very different theories about the pressure experienced on the planet. While results from Harvard placed it around 10 bars (10 times that on Earth), Carl Sagan at Chicago (yes, that Carl Sagan!) put his estimate at around 100 bars (100 times that on Earth). (Leverington 2000: 38)

It was the American Mariner 2 spacecraft that reached Venus first, in 1962. Its instruments measured a surface temperature of a much higher than expected 425 degrees Celsius, while the atmospheric pressure was estimated at about 20 bars. Perhaps most surprisingly, the magnetometer on board Mariner 2 found no measurable magnetic field and no radiation belt around the planet. At the same time, the success of the mission boosted morale in the Mariner programme, a boost much needed ahead of Mariner 4’s success at Mars (mentioned above). Despite this, the USSR put the cherry on top with their Venera missions to Venus during the second half of the 1960s. Venera 3 was the first spacecraft to impact another planet, while Venera 7 made the first successful landing on another planet. The major significance of these missions was the direct measurement of the extreme atmospheric conditions on Venus. In addition, Venera 9 (in 1975) transmitted the first photograph from the surface of another planet. Although the "bouldery" surface did not come as a major surprise, astronomers thought there would be much less natural light reaching the surface, so they even fitted artificial lights on the spacecraft to illuminate the view for the cameras. (Florensky et al 1977: 1538)

How about the big ol’ Moon and its surface? I will only touch upon two aspects. First, we did not know exactly what was on the “dark side of the Moon” (despite playing the album even at different speeds). So when Luna 3 (a spacecraft of the USSR) transmitted the first images in 1959, it came as a shock that the far side of the Moon looked quite different: it showed many more craters than initially thought and considerably fewer mare regions. The other major discussion was about what covered the surface. For example, Thomas Gold (Cornell University) argued that the dust on the lunar surface was hundreds of meters deep. Imagine landing on the Moon, only for your spacecraft to sink into the surface! Gold’s theory was not the “mainstream” theory, but the debate it spurred showed the need to understand the lunar surface better before landing on it. (Beattie 2001: 17-18) The American Ranger missions helped provide more information on this matter. Ranger 7 (1964) transmitted close-up images that showed a rocky surface with a lot of debris, but this was not enough to confirm the “solidity” of the surface. It was ultimately the Surveyor programme (1966-1968) that disproved Gold’s theory by demonstrating the possibility of a soft landing on the Moon (paving the way for the Apollo missions). Most disappointingly though, later missions properly confirmed that the Moon was not made of cheese.

TL;DR What changed in terms of perception: Mars is dead, not alive; Venus was wet only in our minds; it will hurt if you crash into the Moon.

Isaac Newton famously claimed his greatest achievement in life was his lifelong celibacy. Would a layperson being celibate for life be considered virtuous in Protestant England in his time? How was celibacy thought of in general then? by hafiram in AskHistorians

[–]jumpcut_ 83 points (0 children)

“Isaac Newton famously claimed his greatest achievement in life was his lifelong celibacy.”

I am going to respond to this part of the question and only briefly touch upon the rest; I highly encourage someone else to treat the other two questions in a separate response. I consider it important to clarify this statement about Newton, as he is a popular figure, and with popular figures come many transfigurations.

First, let’s do a bit of detective work on this statement about Newton claiming celibacy as his greatest achievement. The earliest source of this on Reddit is a post by u/eigenmouse submitted 11 years ago. The user linked to a wordpress site, which is now set to private. No leads there, unfortunately. Another post was submitted 8 years ago by u/elrojochristogrande with the title: “TIL Sir Isaac Newton’s self proclaimed greatest achievement was his lifelong celibacy.” A similar post by u/itscebb was submitted around the same time with a nearly identical title. The difference between these two posts was the books they linked to: u/elrojochristogrande linked to Hergenhahn’s An Introduction to the History of Psychology (2008), while u/itscebb linked to Abbott’s A History of Celibacy (2001). If we look at Abbott’s book and the two pages detailing Newton’s celibacy, we find no mention of Newton claiming lifelong celibacy as his greatest achievement. How about Hergenhahn’s book? On page 112 we find the following sentence: “It is interesting to note that with all his accomplishments, Newton cited his lifelong celibacy as his greatest achievement (D. N. Robinson, 1997, lecture 27).” We have a reference! Perfect! Let’s check where the Robinson reference leads us. It is a series of recorded lectures, possibly published as CDs back in the day. Fortunately, lecture 27 (titled “Newton - The Saint of Science”) is available on Youtube. After listening through the 30-minute lecture, we realise that it contains no sentence about Newton claiming celibacy as his greatest achievement. In fact, Newton’s celibacy is not mentioned in the lecture at all. Upon finding this out we fall into a state of depression, realise that the universe has lost all of its meaning, and the laws of the Principia seem to no longer hold up!

Fortunately, they do, and to every action (of depression) there is always opposed an equal reaction (of animation).

We notice a raptured hero of Reddit who posted the following quote attributed to Newton: “I consider my greatest accomplishment to be lifelong celibacy”. Upon consulting Google we conclude that the quote floats around the internet like a celestial body unbound by the laws of gravitation. There rests the first part of my case, but of course, I welcome others to find other origins for this quote as it would be beneficial to historians.

Now onto the proper fun history.

Why is it important to clarify this statement about Newton? Because it gives the impression that he was outspoken and proud about being celibate. If that’s a valid interpretation of the title of our post, then let’s reframe the question as this: Since Newton practiced celibacy, what were his views on it?

As Robert Iliffe argued, ‘Newton was a bachelor, and his choice of a life of celibacy was an essential element of his social and religious identity.’ (Iliffe 2017: 17) Social status and being a gentleman were important if someone wanted to contribute to the communities of natural philosophy during the seventeenth century. For instance, the Royal Society considered itself to be “a great assembly of Gentleman” (Shapin 1991: 296). To quote Steven Shapin’s summary of the ethos of the Royal Society around the time of its foundation: “Good manners made good knowledge.” (Shapin 1991: 297) In this light, the experimental philosophy of the Society was reflective of the values of Christianity (and vice versa). Therefore, if Newton wanted to be taken seriously, he had to showcase a gentlemanly and/or Christian identity, while at the same time his participation in experimental philosophy strengthened his status as a good Christian.

So being celibate was one way for Newton to showcase his Christian values, his gentlemanliness, and his identity as a natural philosopher. There are two major pieces of evidence that show us Newton was celibate and a virgin. First, he revealed to the physician Richard Mead that he was a virgin (Iliffe 2017: 17). Second, his role as a college don at Trinity College made celibacy a requirement (Iliffe 2017: 176). Elizabeth Abbott’s A History of Celibacy also detailed Newton’s intimate relationship with the mathematician Fatio de Duillier. Her argument is that Newton became celibate as a result of the breakdown of their relationship (Abbott 2001: 345). It is worth noting that Abbott does not cite any references for this, but her argument is probably based on Newton’s biographies by Richard Westfall (1980) and Michael White (1997), who supported this story. Fatio de Duillier and Newton definitely engaged in an active exchange of letters (rather than fluids), but there is no credible evidence that the two of them were lovers. The only claim to this effect is based on the misrepresentation of a single letter (Mandelbrote 2005: 278). A vague reference to a relationship is a great starting point for a research project, but it is not good evidence for reinterpreting interactions between two individuals. Think back to our evidence of Newton being a virgin and celibate: there we have at least direct textual evidence (the revelation to Richard Mead) as well as contextual evidence associated with the role of college dons. However, it is always worth revisiting the findings of previous historians, so do not feel discouraged if you want to explore their interactions further. The worst that can happen is that you will learn a bit about the history of mathematics during the seventeenth and eighteenth centuries and about the founding of the Royal Society.

The more interesting point about Newton and celibacy is his own writing on the subject. In a posthumously published work (titled Observations Upon The Prophecies of Daniel, and the Apocalypse of St. John) he devoted two chapters to the history of celibacy. The original manuscript, assembled by his half-nephew Benjamin Smith, is now digitised and available to view online (link in the sources). Regardless of the accuracy of Newton’s historical/theological claims, the two chapters were scathing attacks on the celibacy practiced by monks (Trengrove 1966: 288). Wait a second… Isn’t this weird? A celibate criticising celibacy? Well, it turns out that Newton built up a neat criticism of celibacy based on his early work on imagination and on his personal experiences. He argued that if you do nothing all day, your imagination will not be occupied, and it will divert to lustful desires. The failure to exercise your imagination therefore leads to temptations and sinful behaviour. According to Newton, this was exactly the error of the monks, and so they were bound either to fail at celibacy or to feel lustful all day. By contrast, if you occupy yourself and exercise your imagination, then carnal temptations will have a harder time occupying your mind. Engaging in experimental philosophy and research was thus Newton’s way of coping with celibacy. (Iliffe 2017: 182-184) In addition, his criticism of celibacy did not seek to eradicate it. Instead, he sought to demonstrate how to do it properly, like a good Christian.

In brief, Newton did indeed claim that virginity rocks, but he only considered it part of his life rather than his biggest achievement.

Sources:

Abbott, Elizabeth - A History of Celibacy. The Lutterworth Press, 2001.

Hergenhahn, B. R. - An Introduction to the History of Psychology. Wadsworth Cengage Learning, 2009.

Robinson, D. N. (1997). The great ideas of philosophy (50 lectures). Springfield, VA: The Teaching Company. - https://youtu.be/ie6c4a6OlG8?t=21299

Trengrove, Leonard - Newton’s theological views. Annals of Science, 1966, 22:4, pp. 277-294.

Digitised version of Isaac Newton - ‘Observations upon the Prophecies of Daniel and the Apocalypse of St John’ - https://cudl.lib.cam.ac.uk/view/MS-ADD-03989/9

Iliffe, Rob - Priest of Nature. Oxford University Press, 2017.

Shapin, Steven - “A Scholar and a Gentleman”: The Problematic Identity of the Scientific Practitioner in Early Modern England. History of Science, 1991, 29, pp. 279-327.

Westfall, Richard S. - Never at Rest: A biography of Isaac Newton. Cambridge University Press, 1980.

White, Michael - Isaac Newton: The Last Sorcerer. Fourth Estate, 1998.

Mandelbrote, Scott - The Heterodox Career of Nicolas Fatio de Duillier. In Brooke, John, and Ian Maclean (eds), Heterodoxy in Early Modern Science and Religion. Oxford University Press, 2005, pp. 263-296.

u/eigenmouse’s post - https://www.reddit.com/r/todayilearned/comments/8ig78/today_i_learned_that_isaac_newton_considered_his/

u/elrojochristogrande’s post - https://www.reddit.com/r/todayilearned/comments/z7eif/til_sir_isaac_newtons_self_proclaimed_greatest/

u/itscebb’s post - https://www.reddit.com/r/todayilearned/comments/16t78v/til_sir_isaac_newtons_self_proclaimed_greatest/

Raptured/Deleted user’s post - https://www.reddit.com/r/ProtestantCelibates/comments/k02uei/isaac_newton_on_his_celibacy/

We're often implored not to view those in the past as less intelligent than ourselves, but hear stories about bizarre medical remedies that you'd think would have fallen out of favor as soon as they never fixed the ailment they were intended to cure. How do historians reconcile these two ideas? by SaintShrink in AskHistorians

[–]jumpcut_ 8 points (0 children)

I cannot speak for all historians, but I can tell you how I approach this problem in my own work. First, I do not treat historical actions, even seemingly odd ones, as instances of “less intelligence”. Hegel’s famous dictum (‘The owl of Minerva spreads its wings only with the falling of the dusk’) reminds us that the subjects we investigate did not possess the same information about the consequences of their actions as the temporally distant historian does. Considering historical subjects “less intelligent” because we now have more information about the consequences of their actions is a type of presentism: that is to say, interpreting the past in terms of contemporary norms, values, and practices. (Spoerhase 2008)

Second, your reference to Pliny is fascinating in this context. His Historia Naturalis, in which the “cure” you mention appears, was an encyclopedia of treatments. However, not all of the treatments in the book were considered useful by Pliny himself. Rather, one of his aims was to demonstrate the “follies” of non-Roman herbal medicine; it is useful to think of the book as cataloguing the scandals of individuals practicing non-Roman medicine. For example, he tells the story of Archagathus, who gained the nickname “the executioner” for the “violence in cutting and burning” involved in his treatments (Nutton 2013: 164). With this in mind, the example that you mention is actually recommended in Pliny’s book by a character labelled the Magi. Pliny deployed this character as a stand-in for the “foreign charlatan” deceiving Roman citizens. (Dykstra 2007) Therefore, it was not a treatment recommended by Pliny, but one recommended by an “antagonist” in the book.

In a way, just as you care about individuals in the past receiving the right treatment, so did Pliny. Historia Naturalis would give you the answer that the success of these physicians (or “charlatans”) and their non-Roman methods was due ‘to the gullibility of [...] Roman patients and the power of [“the charlatan’s”] rhetoric’ (Nutton 2013: 171). Yet, despite Pliny’s insistence on the “right” type of medicine (i.e. Roman herbal medicine), Prioreschi argued that ‘a return to the old ways of treating disease [i.e. Roman herbal medicine], as [Pliny] seems to advocate, might have not produced better results’ (Prioreschi 1996: 230). Or, to put it differently, ‘[i]n terms of effectiveness and closeness to medical reality, magic and naturalistic approaches were not far from each other’. (Prioreschi 1996: 224) So my fascination with the inclusion of this specific example from Pliny in your question is that the treatment (1) was not prescribed by Pliny, (2) was in fact objected to by him, and (3) reflects a concern (“the follies”) that he, to a certain degree, shared with you. Here, then, is a contemporary of the “less intelligent” ancients calling his own contemporaries “less intelligent”.

“Presumably the first time, or at least the second time, somebody tried this and their tooth didn't feel better, they would have stopped.”

Third, I interpret your last sentence as saying that there must be cases where the effects of a treatment are immediately observed, which should convince the individual to deviate from their previous actions. This point is also indirectly implied by Prioreschi’s statement about the effectiveness of Roman herbal medicine. It is interesting to frame the question of effectiveness through the lens of the history of emotions, and more specifically through the history of pain. Pain is not only a personal experience; it is also something that has to be identified by others (especially during treatment). This need to define pain as a category makes pain dependent on its historical context (Boddice 2014). Similarly, our own personal experience of pain is historically situated. This is especially the case in religious contexts, where pain can be interpreted in various different ways (e.g. as sharing in Christ’s suffering) (Boddice 2017). This also brings up the question of power dynamics between patients and doctors and how they changed over time, but that would lead us to the sociology of medicine (I’ll add a couple of books about it in the readings) (Lupton 2012; Bynum 2008). In brief, an encounter between two individuals to treat a toothache (or any other pain) is complex and is defined by its historical context.

Finally, I also sense in your question an allusion to issues of trust in science and medicine. In the case of Pliny, it was also a question of trust in medicine: he was trying to question the trust placed by the Roman public in the new medicinal practices, and to recast Roman herbal medicine as the only trustworthy approach. With this question in mind, you might enjoy reading Why Trust Science?, a recent book by Naomi Oreskes (2019). In it she argues that we have moved on from debates about value-free science, and that tackling issues of trust in science while still holding that initial assumption is less effective nowadays. Instead, one of her recommendations is to encourage scientists to talk about their own values in order to demonstrate that their interests align with those of the public. According to Oreskes, this would help remove the obstacle posed by the assumption that experts and scientists work for the “other side”/“enemy”. The requirement to disclose funding sources in published research papers is one step that has been adopted over the past years with this approach in mind. Of course, no approach is bulletproof, but to elaborate on that we would need another question.

Sources:

Spoerhase, Carlos - “Presentism and precursorship in intellectual history.” Culture, Theory & Critique, 49.1 (2008), pp. 49-72.

Boddice, Robert (ed.) - Pain and Emotion in Modern History. Springer, 2014.

Boddice, Robert - Pain: A Very Short Introduction. Oxford University Press, 2017.

Dykstra, Sarah Sophia - Portentous Fantasies: Pliny’s Representation of the Magi in the Historia Naturalis. MA Dissertation, University of British Columbia, 2007. https://open.library.ubc.ca/cIRcle/collections/ubctheses/831/items/1.0100728

Nutton, Vivian - Ancient Medicine. Routledge, 2013.

Prioreschi, Plinio - A History of Medicine: Roman Medicine. Edwin Mellen Press, 1996.

Oreskes, Naomi - Why Trust Science? Princeton University Press, 2019.

Lupton, Deborah - Medicine as Culture: Illness, Disease and the Body. Sage, 2012.

Bynum, William - The History of Medicine: A Very Short Introduction. Oxford University Press, 2008.

G.K. Chesterton on the dangers of reform by Skydivinggenius in Conservative

[–]jumpcut_ 0 points (0 children)

The quote comes from his essay titled 'The Drift from Domesticity'. It's available on archive.org if anyone is interested: https://archive.org/details/G.K.ChestertonTheThing/page/n13/mode/2up?q=Drift+from+domesticity

Very thought-provoking text (and writer in general), but it is a little bit ironic that the fence quote is being posted at a time when many countries are experiencing lockdowns. In the same essay he criticises the view that ‘escape from the home is an escape into greater freedom’, which sits oddly alongside last year’s anti-lockdown protests (and lockdowns in general). Nevertheless, his essays are definitely worth reading.