The source base for A Gospel of Health and Salvation is the collection of scanned periodicals produced by the Office of Archives, Statistics, and Research of the Seventh-day Adventist Church (SDA). That this collection of documents is openly available on the web has been fundamental to the success of this project. One of the greatest challenges for historical scholarship that seeks to leverage large digital collections is access to the relevant materials. While projects such as Chronicling America and resources such as the Digital Public Library of America are indispensable, many specialized resources are available only through proprietary databases and library subscriptions that impose limits on the ways scholars can interact with their resources.[^1]

Because the SDA published the digital periodicals on the open web, I did not have to navigate the paywalls (and legal landmines) involved in using text from major library databases, a major boon for the digital project.[^3] And although the site does not provide an API for accessing the documents, the structure of its pages is regular, making the site a good candidate for web scraping. Relying on an organization to provide its own historical documents, however, raises challenges of its own. Because of the interests of the hosting organization, in this case the Seventh-day Adventist Church, the collection both shapes and is shaped by a particular narrative of the denomination's history and development. For example, issues of Good Health, which was published by John Harvey Kellogg, are (almost entirely) absent from the SDA's collection after 1907, the point at which Kellogg was disfellowshipped from the denomination, even though he continued publishing the periodical into the 1940s.[^2] Such interests do not invalidate the usefulness of the collection, as all archives have limitations and goals, but they need to be acknowledged and taken into account in the analysis.

To determine which titles fell within my time and regions of study, I browsed the periodicals section of the site and compiled a list of those that fit my geographic and temporal constraints. These are:

As this was my first technical task for the dissertation, my initial method for identifying the URLs of the documents I wanted to download was rather manual. I saved an .html file for each index page that contained documents I wanted. I then passed those .html files to a script (similar to the one recorded here) that used BeautifulSoup to extract the PDF ids, reconstruct the URLs, and write them to a new text file, scrapeList.txt. After manually deleting the URLs for any documents that were out of range, I passed scrapeList.txt to wget using the following syntax:[^4]

wget -i scrapeList.txt -w 2 --limit-rate=200k


I ran this process for each of the periodical titles included in this study. It took approximately a week to download all 13,000 files to my local machine. The resulting corpus takes up 27.19 GB of space.
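The early manual script can be sketched as follows. This is a reconstruction from the description above, not the original script: the `td` class and the URL pattern come from the archive pages used later in this notebook, while the sample HTML and the function name are invented for illustration. I use Python's built-in `html.parser` here so the sketch runs without the lxml dependency.

```python
from bs4 import BeautifulSoup

def build_scrape_list(html, baseurl):
    """Extract PDF ids from a saved index page and build download URLs.

    Reconstruction of the manual script described in the text; the
    "ms-vb-title" cell class matches the archive's index pages.
    """
    soup = BeautifulSoup(html, "html.parser")
    urls = []
    for cell in soup.find_all("td", class_="ms-vb-title"):
        pdf_id = cell.get_text(strip=True)
        urls.append("{}/{}.pdf".format(baseurl, pdf_id))
    return urls

# A miniature stand-in for a saved index page:
sample = '<table><tr><td class="ms-vb-title">SS18880101-01</td></tr></table>'
urls = build_scrape_list(sample, "http://documents.adventistarchives.org/SSQ")

# Write one URL per line, ready to hand to `wget -i scrapeList.txt`:
with open("scrapeList.txt", "w") as out:
    out.write("\n".join(urls))
```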

This notebook reflects a more automated version of that process, created in 2017 to download missing documents. The example recorded here is for downloading the Sabbath School Quarterly collection, which I missed during my initial collection phase.

In these scripts I use the requests library to retrieve the HTML from the document directory pages and BeautifulSoup4 to locate the filenames. Finally, I use requests to download the files.

In [3]:
from bs4 import BeautifulSoup
from os.path import join
import re
import requests

In [4]:
def check_year(pdfID):
    """Use regex to check the year from the PDF filename.

    Args:
        pdfID (str): The filename of the PDF object, formatted as
            PREFIXYYYYMMDD-V00-00
    """
    split_title = pdfID.split('-')
    title_date = split_title[0]
    date = re.findall(r'[0-9]+', title_date)
    year = date[0][:4]
    if int(year) < 1921:
        return True
    else:
        return False

def filename_from_html(content):
    """Use BeautifulSoup to extract the PDF ids from the HTML page.

    This script is customized to the structure of the archive pages at
    documents.adventistarchives.org.

    Args:
        content (str): Content is retrieved from a URL using the get_html_page
            function.
    """
    soup = BeautifulSoup(content, "lxml")
    buttons = soup.find_all('td', class_="ms-vb-title")

    pdfIDArray = []

    for each in buttons:
        # The document id is the text of each title cell.
        pdfID = each.get_text(strip=True)
        pdfIDArray.append(pdfID)

    return pdfIDArray

def get_html_page(url):
    """Use the requests library to get HTML content from URL.

    Args:
        url (str): URL of the index page to retrieve.
    """
    r = requests.get(url)

    return r.text
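
To make the filename convention concrete, the date logic can be exercised on a couple of ids. The first id appears later in this notebook; the second is an invented example of a post-1920 filename following the same PREFIXYYYYMMDD-V00-00 pattern.

```python
import re

def year_of(pdf_id):
    """Pull the four-digit year out of a PREFIXYYYYMMDD-V00-00 id,
    mirroring the logic of check_year above."""
    digits = re.findall(r'[0-9]+', pdf_id.split('-')[0])
    return int(digits[0][:4])

print(year_of("SS18880101-01"))   # 1888, inside the study period
print(year_of("SS19250101-01"))   # 1925, would be filtered out
```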


The first step is to set the directory where I want to save the downloaded documents, as well as the root URL for the location of the PDF documents.

This example is set up for the Sabbath School Quarterly.

In [6]:
"""If running locally, you will need to create the corpus folder or
update the path to the location of your choice.
"""


My next step is to generate a list of the IDs for the documents I want to download.

In [7]:
index_page_urls = ["http://documents.adventistarchives.org/SSQ/Forms/AllItems.aspx?View={44c9b385-7638-47af-ba03-cddf16ec3a94}&SortField=DateTag&SortDir=Asc",
]

In [8]:
docs_to_download = []

for url in index_page_urls:
    content = get_html_page(url)
    pdfs = filename_from_html(content)

    for pdf in pdfs:
        if check_year(pdf):
            print("Adding {} to download list".format(pdf))
            docs_to_download.append(pdf)
        else:
            pass

Adding SS18880101-01 to download list


Finally, I loop through all of the filenames, create the URL to the PDF, and use requests to download a copy of the document into my directory for processing.

In [11]:
for doc_name in docs_to_download:
    url = join(baseurl, "{}.pdf".format(doc_name))
    print(url)
    # get request
    response = requests.get(url)
    # write to file in the corpus directory set above
    with open(join(corpusDir, "{}.pdf".format(doc_name)), 'wb') as file:
        file.write(response.content)

http://documents.adventistarchives.org/SSQ/SS18880101-01.pdf
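
Unlike the earlier wget invocation, the requests loop above includes no delay between downloads. A variant that restores the two-second wait of `wget -w 2` is sketched below; the `fetch` parameter is injected only so the sketch can run without network access (with requests it would be `lambda u: requests.get(u).content`).

```python
import time
from os.path import join

def polite_download(doc_names, baseurl, fetch, delay=2):
    """Fetch each document's PDF, pausing `delay` seconds between
    requests, as with `wget -w 2` in the original collection pass."""
    results = {}
    for name in doc_names:
        url = join(baseurl, "{}.pdf".format(name))
        results[name] = fetch(url)
        time.sleep(delay)
    return results

# Exercise the loop with a stub fetcher that records the URLs requested.
requested = []
def stub_fetch(url):
    requested.append(url)
    return b"%PDF-stub"

polite_download(["SS18880101-01"], "http://documents.adventistarchives.org/SSQ",
                stub_fetch, delay=0)
```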