The Emergent Beauty of Mathematics

The first 30 seconds of a Brownian tree (code can be found here).

Like a flower in full bloom, nature reveals its patterns shaped by mathematics. As particles collide with one another, they create snowflake-like fractals through emergence. How do fish swarm in schools, or how does consciousness come about from the brain? Simulations can provide answers.

Through code, you can simulate how living cells or physical particles interact with one another, using equations that govern how cells behave when they meet and how they grow and evolve into larger structures. In the gif above, diffusion-limited aggregation creates Brownian trees: the structures that emerge when randomly moving particles stick together. Particles in fluid (like color dye dropped into water) take on these patterns when you look at them under a microscope. As the particles collide and form trees, they create shapes and patterns like water crystals on glass. These visuals give you a way of appreciating how beautiful mathematics is. The way mathematical theory borrows from nature, and the way biological communities of living organisms themselves depend on physical laws, shows how such an interdisciplinary approach can bridge different disciplines.

After about 20 minutes, the branches of the Brownian tree take form.

In the code, the particles are set to move with random velocities in two dimensions and, if they collide with the tree (a single central particle at the beginning), they become part of it. As the tree grows over time, it takes the shape of branches, much the way neurons in the brain form trees that send signals between one another. These fractals, in their uniqueness, have a kind of mathematical beauty.
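
The linked code implements the full version; below is a minimal lattice sketch of the same idea (the grid size and walker count are assumptions), which releases random walkers one at a time and freezes each onto the tree the moment it touches it.

import random

GRID = 101                                   # assumed lattice size
grid = [[False] * GRID for _ in range(GRID)]
grid[GRID // 2][GRID // 2] = True            # the seed particle at the center

def touches_tree(x, y):
    """True if any of the eight neighboring cells belongs to the tree."""
    return any(grid[(x + dx) % GRID][(y + dy) % GRID]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) if dx or dy)

for _ in range(500):                         # release 500 random walkers
    x, y = random.randrange(GRID), random.randrange(GRID)
    while not touches_tree(x, y):
        x = (x + random.choice((-1, 0, 1))) % GRID
        y = (y + random.choice((-1, 0, 1))) % GRID
    grid[x][y] = True                        # the walker sticks on contact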

Conway’s Game of Life represents another way order emerges from randomness.

Flashing lights coming and going like stars shining in the sky are more than just randomness. These models of cellular interactions are known as cellular automata. The gif above shows an example of Conway’s Game of Life, a simulation of how living cells interact with one another.

These cells “live” and “die” according to four simple rules: (1) live cells with fewer than two live neighbors die, as if by underpopulation, (2) live cells with two or three live neighbors live on to the next generation, (3) live cells with more than three live neighbors die, as if by overpopulation and (4) dead cells with exactly three live neighbors become live cells, as if by reproduction.
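
Those four rules translate almost directly into code. Here is one minimal way to write them (a sketch, using NumPy to count each cell’s eight neighbors on a wrapping board):

import numpy as np

def step(board):
    """Advance a Game of Life board (a 2D array of 0s and 1s) one generation."""
    # Count live neighbors by summing the eight shifted copies of the board.
    neighbors = sum(np.roll(np.roll(board, dx, axis=0), dy, axis=1)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    # Rules 1-3: a live cell survives with two or three live neighbors.
    # Rule 4: a dead cell with exactly three live neighbors comes alive.
    return ((neighbors == 3) | ((board == 1) & (neighbors == 2))).astype(int)

board = np.random.randint(0, 2, size=(20, 20))   # a random starting board
for _ in range(10):
    board = step(board)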

Conus textile shows a similar cellular automaton pattern on its shell.

Through these rules, specific shapes emerge, such as “gliders” or “knightships,” that you can further describe with rules and equations. You’ll find natural versions of cells obeying rules like these in the colorful patterns on a seashell. Complex structures emerging from more basic, fundamental sets of rules unite these observations. While the beauty of these structures becomes more and more apparent from the patterns shared between disciplines, searching for these patterns in other contexts, such as human behavior, can be more difficult.

Recent writing, like Genesis: The Deep Origin of Societies by biologist E.O. Wilson, takes on the debate over how altruism in humans evolved. While the shape of snowflakes can emerge from the interactions between water molecules, humans getting along with one another seems far more complicated and higher-level. Though you can find similar cooperation in ants and termites creating societies, how did evolution let this happen?

Biologists have answered that organisms choose mates who increase the survival chances of themselves and their offspring, passing on their genes in the process. Though they’ve argued this for decades, Wilson offers a contrary point of view: within groups, selfish organisms defeat altruistic ones, but altruistic groups beat selfish groups overall. This group selection drives the emergence of altruism. Through these arguments, both sides have appealed to the mathematics of nature, showing its growing importance in recognizing the patterns of life.

Wilson clarifies that data analysis and mathematical modeling should come second to the biology itself. Becoming experts on organisms themselves should be a priority. Regardless of what form it takes, the beauty is still there, even if it’s below the surface.

How to Create Interactive Network Graphs (from Twitter or elsewhere) with Python

In this post, a gentle introduction to different Python packages will let you create network graphs users can interact with. Taking a few steps into graph theory, you can apply these methods to anything from links between the severity of terrorist attacks to the prices of taxi cabs. In this tutorial, you’ll use information from Twitter to make graphs anyone can appreciate.

The code for steps 1 and 2 can be found on GitHub here, and the code for the rest, here.

Table of contents:

  1. Get Started
  2. Extract Tweets and Followers
  3. Process the Data
  4. Create the Graph
  5. Evaluate the Graph
  6. Plot the Map

1. Get Started

Make sure you’re familiar with using a command line interface such as Terminal, and download the necessary Python packages (chart-studio, matplotlib, networkx, pandas, plotly and python-twitter); you can use Anaconda to install them. This tutorial will introduce parts of the script you can run from the command line to extract tweets and visualize them.

If you don’t have a Twitter developer account, you’ll need to log in here and get one. Then create an app and find your keys and secret codes for the consumer and access tokens. These let you extract information from Twitter.

2. Extract Tweets and Followers

To extract tweets, run the script below. In this example, the tweets of the UCSC Science Communication class of 2020 are analyzed, so their Twitter handles are listed in screennames. Replace the variables currently defined as None below with your own keys and tokens, and keep them safe: don’t share them with others. Set datadir to the output directory where the data will be stored.

The code begins with import statements to use the required packages including json and os, which should come installed with Python.

import json
import os
import pickle
import twitter 

screennames = ["science_ari", "shussainather", "laragstreiff",                  "scatter_cushion", "jessekathan", "jackjlee",                 "erinmalsbury", "joetting13", "jonathanwosen",                 "heysmartash"] 

CONSUMER_KEY = None
CONSUMER_SECRET = None
ACCESS_TOKEN_KEY = None
ACCESS_TOKEN_SECRET = None

datadir = "data/twitter"

Extract the information we need. This code goes through each screen name, accesses their tweet and follower information, and saves both to output JSON and pickle files.

t = twitter.Api(consumer_key=CONSUMER_KEY,
                consumer_secret=CONSUMER_SECRET,
                access_token_key=ACCESS_TOKEN_KEY,
                access_token_secret=ACCESS_TOKEN_SECRET)

for sn in screennames:
    # For each user, get the followers and tweets and save them
    # to output pickle and JSON files.
    fo = datadir + "/" + sn + ".followers.pickle"
    # Get the follower information.
    fof = t.GetFollowers(screen_name=sn)
    with open(fo, "wb") as fofpickle:  # pickle files are binary
        pickle.dump(fof, fofpickle, protocol=2)
    with open(fo, "rb") as fofpickle:
        with open(fo.replace(".pickle", ".json"), "w") as fofjson:
            fofdata = pickle.load(fofpickle)
            # User objects aren't JSON serializable, so convert them to dicts.
            json.dump([u.AsDict() for u in fofdata], fofjson)
    # Get the user's timeline with the 500 most recent tweets.
    timeline = t.GetUserTimeline(screen_name=sn, count=500)
    tweets = [i.AsDict() for i in timeline]
    # Store the information in a JSON file.
    with open(datadir + "/" + sn + ".tweets.json", "w") as tweetsjson:
        json.dump(tweets, tweetsjson)

This should extract the followers and tweets and save them to pickle and JSON files in the datadir.

3. Process the Data

Now that you have an input JSON file of tweets, you can set its path as the tweetsjson variable in the code below to read it as a pandas DataFrame.

For the rest of the tutorial, start a new script for convenience.

import json
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd
import re

from plotly.offline import iplot, plot
from operator import itemgetter

Use pandas to import the JSON file as a pandas DataFrame.

df = pd.read_json(tweetsjson)

Set tfinal as the final DataFrame to make.

tfinal = pd.DataFrame(columns=["created_at", "id", "in_reply_to_screen_name",
                               "in_reply_to_status_id", "in_reply_to_user_id",
                               "retweeted_id", "retweeted_screen_name",
                               "user_mentions_screen_name", "user_mentions_id",
                               "text", "user_id", "screen_name",
                               "followers_count"])

Then, extract the columns you’re interested in and add them to tfinal. (filldf() and the helper functions it calls are defined below; in your script, place those definitions above these lines.)

eqcol = ["created_at", "id", "text"]
tfinal[eqcol] = df[eqcol]
tfinal = filldf(tfinal)
tfinal = tfinal.where((pd.notnull(tfinal)), None)

Use the following functions to extract information from the tweets. Each function extracts information from the input df DataFrame and adds it to tfinal.

First, get the basic information: screen name, user ID and how many followers.

def getbasics(tfinal):
    """
    Get the basic information about the user.
    """
    tfinal["screen_name"] = df["user"].apply(lambda x: x["screen_name"])
    tfinal["user_id"] = df["user"].apply(lambda x: x["id"])
    tfinal["followers_count"] = df["user"].apply(lambda x: x["followers_count"])
    return tfinal

Then, get information on which tweets have been retweeted.

def getretweets(tfinal):
    """
    Get retweets.
    """
    # Inside the "retweeted_status" tag, find "user" and get its "screen_name" and "id".
    tfinal["retweeted_screen_name"] = df["retweeted_status"].apply(lambda x: x["user"]["screen_name"] if x is not np.nan else np.nan)
    tfinal["retweeted_id"] = df["retweeted_status"].apply(lambda x: x["user"]["id_str"] if x is not np.nan else np.nan)
    return tfinal

Figure out which tweets are replies and to whom they are replying.

def getinreply(tfinal):
    """
    Get reply info.
    """
    # Just copy the "in_reply" columns to the new DataFrame.
    tfinal["in_reply_to_screen_name"] = df["in_reply_to_screen_name"]
    tfinal["in_reply_to_status_id"] = df["in_reply_to_status_id"]
    tfinal["in_reply_to_user_id"] = df["in_reply_to_user_id"]
    return tfinal

The following function runs each of these functions to get the information into tfinal.

def filldf(tfinal):
    """
    Put it all together.
    """
    getbasics(tfinal)
    getretweets(tfinal)
    getinreply(tfinal)
    return tfinal

You’ll use this getinteractions() function in the next step when creating the graph. It takes the information from the tfinal DataFrame and puts it into a format a graph can use.

def getinteractions(row):
    """
    Get the interactions between different users.
    """
    # From every row of the original DataFrame,
    # first obtain the "user_id" and "screen_name".
    user = row["user_id"], row["screen_name"]
    # Be careful if there is no user id.
    if user[0] is None:
        return (None, None), []

For the remainder of the function, gather the interactions if they’re there.

    # The interactions are going to be a set of tuples.
    interactions = set()

    # Add all interactions. 
    # First, we add the interactions corresponding to replies adding 
    # the id and screen_name.
    interactions.add((row["in_reply_to_user_id"], 
    row["in_reply_to_screen_name"]))
    # After that, we add the interactions with retweets.
    interactions.add((row["retweeted_id"], 
    row["retweeted_screen_name"]))
    # And later, the interactions with user mentions.
    interactions.add((row["user_mentions_id"], 
    row["user_mentions_screen_name"]))

    # Discard if user id is in interactions.
    interactions.discard((row["user_id"], row["screen_name"]))
    # Discard all not existing values.
    interactions.discard((None, None))
    # Return user and interactions.
    return user, interactions

4. Create the Graph

Initialize the graph with networkx.

graph = nx.Graph()

Loop through the tfinal DataFrame and get the interaction information. Use the getinteractions function to get each user and interaction involved with each tweet.

for index, tweet in tfinal.iterrows():
    user, interactions = getinteractions(tweet)
    user_id, user_name = user
    tweet_id = tweet["id"]
    for interaction in interactions:
        int_id, int_name = interaction
        graph.add_edge(user_id, int_id, tweet_id=tweet_id)
        # Store each user's name as a node attribute.
        graph.nodes[user_id]["name"] = user_name
        graph.nodes[int_id]["name"] = int_name

5. Evaluate the Graph

In the field of social network analysis (SNA), researchers use measurements of nodes and edges to tell what graphs are like. This lets you separate the signal from the noise when looking at network graphs.

First, look at the degrees and edges of the graph. The print statements should print out the information about these measurements.

degrees = [val for (node, val) in graph.degree()]
print("The maximum degree of the graph is " + str(np.max(degrees)))
print("The minimum degree of the graph is " + str(np.min(degrees)))
print("There are " + str(graph.number_of_nodes()) + " nodes and " + str(graph.number_of_edges()) + " edges present in the graph")
print("The average degree of the nodes in the graph is " + str(np.mean(degrees)))

Are all the nodes connected?

if nx.is_connected(graph):
    print("The graph is connected")
else:
    print("The graph is not connected")
print("There are " + str(nx.number_connected_components(graph)) + " connected components in the graph.")

Information about the largest subgraph can tell you what sort of tweets represent the majority.

largestsubgraph = max((graph.subgraph(c) for c in nx.connected_components(graph)), key=len)
print("There are " + str(largestsubgraph.number_of_nodes()) + " nodes and " + str(largestsubgraph.number_of_edges()) + " edges present in the largest component of the graph.")

The clustering coefficient tells you how closely the nodes congregate, based on the density of the connections surrounding a node. If many nodes are connected in a small area, there will be a high clustering coefficient.

print("The average clustering coefficient is " + str(nx.average_clustering(largestsubgraph)) + " in the largest subgraph")
print("The transitivity of the largest subgraph is " + str(nx.transitivity(largestsubgraph)))
print("The diameter of our graph is " + str(nx.diameter(largestsubgraph)))
print("The average distance between any two nodes is " + str(nx.average_shortest_path_length(largestsubgraph)))

Centrality tells you how important each node is in the network, and there are several ways to measure it. “Degree centrality” counts the direct, “one step” connections each node has to other nodes. “Betweenness centrality” represents which nodes act as “bridges” between nodes in a network, by finding the shortest paths and counting how many times each node falls on one. “Closeness centrality,” instead, scores each node based on the sum of its shortest paths to every other node.

graphcentrality = nx.degree_centrality(largestsubgraph)
maxde = max(graphcentrality.items(), key=itemgetter(1))
graphcloseness = nx.closeness_centrality(largestsubgraph)
graphbetweenness = nx.betweenness_centrality(largestsubgraph, normalized=True, endpoints=False)
maxclo = max(graphcloseness.items(), key=itemgetter(1))
maxbet = max(graphbetweenness.items(), key=itemgetter(1))

print("The node with ID " + str(maxde[0]) + " has a degree centrality of " + str(maxde[1]) + " which is the max of the graph.")
print("The node with ID " + str(maxclo[0]) + " has a closeness centrality of " + str(maxclo[1]) + " which is the max of the graph.")
print("The node with ID " + str(maxbet[0]) + " has a betweenness centrality of " + str(maxbet[1]) + " which is the max of the graph.")

6. Plot the Map

Plot the graph with the Fruchterman-Reingold layout algorithm, a force-directed method designed to provide a clear, explicit view of how the nodes are connected. The layout returns a dictionary pos mapping each node to its x- and y-coordinates. Then get the edges and store their endpoints in lists Xe and Ye for the x- and y-directions.

G = largestsubgraph  # assuming we plot the largest component from step 5
pos = nx.fruchterman_reingold_layout(G)

Xe = []
Ye = []
for e in G.edges():
    Xe.extend([pos[e[0]][0], pos[e[1]][0], None])
    Ye.extend([pos[e[0]][1], pos[e[1]][1], None])

Define the Plotly “traces” for nodes and edges. Plotly uses these traces as a way of storing the graph data right before it’s plotted. The node coordinates Xn and Yn come from pos, and labels holds one label per node.

trace_nodes = dict(type="scatter",
                 x=Xn, 
                 y=Yn,
                 mode="markers",
                 marker=dict(size=28, color="rgb(0,240,0)"),
                 text=labels,
                 hoverinfo="text")

trace_edges = dict(type="scatter",                  
                 mode="lines",                  
                 x=Xe,                  
                 y=Ye,                 
                 line=dict(width=1, color="rgb(25,25,25)"),                                         hoverinfo="none")

This image shows an example of a graph plotted with the Fruchterman-Reingold algorithm we used above, designed to provide clear, explicit ways the nodes are connected.

The force-directed Fruchterman-Reingold algorithm draws nodes in an understandable way.

Use the axis and layout variables to customize what appears on the graph. Options such as showline=False and showticklabels=False hide the axis line, grid, tick labels and title of the graph. Then the fig variable creates the actual figure.

axis = dict(showline=False,
            zeroline=False,
            showgrid=False,
            showticklabels=False,
            title="")


layout = dict(title= "My Graph",
font= dict(family="Balto"),
width=600,
height=600,
autosize=False,
showlegend=False,
xaxis=axis,
yaxis=axis,
margin=dict(
l=40,
r=40,
b=85,
t=100,
pad=0,
),
hovermode="closest",
plot_bgcolor="#EFECEA", # Set background color.
)


fig = dict(data=[trace_edges, trace_nodes], layout=layout)

Annotate with the information you want others to see on each node. Use the labels variable (defined above, with the same length as pos) to list what should appear as an annotation.

labels = list(range(len(pos)))

def make_annotations(pos, anno_text, font_size=14, font_color="rgb(10,10,10)"):
    L = len(pos)
    if len(anno_text) != L:
        raise ValueError("The lists pos and text must have the same len")
    annotations = []
    # pos is a dictionary keyed by node, so iterate over its values.
    for k, coord in enumerate(pos.values()):
        annotations.append(dict(text=anno_text[k],
                                x=coord[0],
                                # This offset is chosen by trial and error.
                                y=coord[1] + 0.075,
                                xref="x1", yref="y1",
                                font=dict(color=font_color, size=font_size),
                                showarrow=False))
    return annotations

fig["layout"].update(annotations=make_annotations(pos, labels))

Finally, plot. In a notebook, use iplot; from a script, the plot function imported earlier writes the figure to an HTML file instead.

iplot(fig)

An example graph

Make a word cloud in a single line of Python

Moby-Dick, visualized

This is a concise way to make a word cloud using Python. It can teach you the basics of coding while creating a nice graphic.

It’s actually four lines of code, but making the word cloud only takes one line, the final one.

import nltk
from wordcloud import WordCloud
nltk.download("stopwords")
WordCloud(background_color="white", max_words=5000, contour_width=3, contour_color="steelblue").generate_from_text(" ".join([r for r in open("mobydick.txt", "r").read().split() if r not in set(nltk.corpus.stopwords.words("english"))])).to_file("wordcloud.png")

Just tell me what to do now!

The first two lines specify the required packages, which you can download with these links: nltk and wordcloud. The third line downloads the stop words (common words like “the”, “a” and “in”) that you don’t want in your word cloud.

The fourth line is complicated. Calling the WordCloud() constructor, you can specify the background color, contour color and other options (found here). generate_from_text() takes a string of words to put in the word cloud.

The " ".join() creates this string of words separated by spaces from a list of words. The for loop in the square brackets[] creates this list of each word from the input file (in this case, mobydick.txt) with the r variable letting you use each word one at a time in the list.

The input file is open()ed, read() and split() into its words, keeping only those (using if) that aren’t in nltk.corpus.stopwords.words("english"). Finally, to_file() saves the image as wordcloud.png.
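
If the one-liner is hard to read, here is the same pipeline unrolled over several lines (same packages, same output file):

import nltk
from wordcloud import WordCloud

nltk.download("stopwords")
stopwords = set(nltk.corpus.stopwords.words("english"))

# Read the text, drop the stop words, and rejoin the rest into one string.
words = open("mobydick.txt", "r").read().split()
text = " ".join(w for w in words if w not in stopwords)

# Build the cloud and save it to an image file.
cloud = WordCloud(background_color="white", max_words=5000,
                  contour_width=3, contour_color="steelblue")
cloud.generate_from_text(text).to_file("wordcloud.png")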

How to use this code

In the code, change "mobydick.txt" to the name of your text file (keep the quotation marks). Save the code in a file makewordcloud.py in the text file’s directory, and use a command line interface (such as Terminal) to navigate to the directory.

Run your script using python makewordcloud.py, and check out your wordcloud.png!

Global Mapping of Critical Minerals

The periodic table below illustrates the global abundance of critical minerals in the Earth’s crust in parts per million (ppm). Hover over each element to view! Lanthanides and actinides are omitted due to lack of available data.

Data is obtained from the USGS handbook “Critical Mineral Resources of the United States— Economic and Environmental Geology and Prospects for Future Supply.” The code used is found here.
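
The linked code has the full plot; as a rough sketch of how a Bokeh hover tooltip works, you lay the elements out on a periodic-table grid and attach a HoverTool. The element values below are illustrative placeholders, not the USGS figures.

from bokeh.models import ColumnDataSource, HoverTool
from bokeh.plotting import figure, show

# Illustrative placeholder values, not the USGS data.
source = ColumnDataSource(data=dict(
    symbol=["Ga", "Ge", "In", "Co", "Be"],
    group=[13, 14, 13, 9, 2],     # periodic table column
    period=[4, 4, 5, 4, 2],       # periodic table row
    abundance=[19.0, 1.5, 0.25, 25.0, 2.8],
))

p = figure(title="Crustal abundance (ppm)", x_range=(0, 19), y_range=(8, 0))
p.rect(x="group", y="period", width=0.9, height=0.9, source=source)
p.add_tools(HoverTool(tooltips=[("Element", "@symbol"),
                                ("Abundance (ppm)", "@abundance")]))
show(p)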

Because these minerals tend to concentrate in specific countries, like niobium in Brazil or antimony in China, and remain central to many areas of society, such as national defense or engineering, governments like the US have taken to listing these minerals as “critical.”

The abundance across different continents is shown in the map above.

You can find gallium, the most abundant of the critical minerals, substituting for aluminum and zinc, elements smaller than gallium, in their ores. Processing bauxite ore or the sphalerite ore of zinc (from sediment-hosted, Mississippi Valley-type and volcanogenic massive sulfide deposits) yields gallium. The US meets its gallium needs through primary, recycled and refined forms of the element.

Germanium and indium have uses in electronics, flat-panel display screens, light-emitting diodes (LEDs) and solar power arrays. China, Belgium, Canada, Japan and South Korea are the main producers of indium, while germanium production can be more complicated. In many cases, countries import primary germanium from others, such as Canada importing from the US or Finland from the Democratic Republic of the Congo, to recover it.

Rechargeable battery cathodes and jet aircraft turbine engines make use of cobalt. While the element is the central atom in vitamin B12, excess and overexposure can cause lung and heart dysfunction and dermatitis.

As one of only three countries that processes beryllium into products, the US doesn’t put much time or money into exploring for new deposits within its own borders, because a single producer dominates the domestic beryllium market. Beryllium finds uses in magnetic resonance imaging (MRI) and medical lasers.

A Deep Learning Overview with Python

This course proposes a quick introduction to deep learning and two of its major networks, convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The purpose is to give an intuitive sense of how to implement deep learning approaches for various tasks. To follow along outside a notebook, run the Python code of each cell in a separate file. The content below each cell of this notebook is the output from running that cell.

Simple perceptron

In [1]:
import numpy as np

# sigmoid function
def sigmoid(x, deriv=False):
    if deriv:
        return x*(1-x)
    return 1/(1+np.exp(-x))
    
# input dataset
X = np.array([[0,0,1],
              [0,1,1],
              [1,0,1],
              [1,1,1]])
    
# output dataset            
y = np.array([[0,0,1,1]]).T

# seed random numbers to make calculation
# deterministic (just a good practice)
np.random.seed(1)

# initialize weights randomly with mean 0
syn0 = 2*np.random.random((3,1)) - 1

for j in range(100000):

    # forward propagation
    l0 = X
    l1 = sigmoid(np.dot(l0,syn0))

    # how much did we miss?
    l1_error = y - l1
    if (j% 10000) == 0:
        print("Error:" + str(np.mean(np.abs(l1_error))))

    # multiply how much we missed by the 
    # slope of the sigmoid at the values in l1
    l1_delta = l1_error * sigmoid(l1,True)

    # update weights
    syn0 += np.dot(l0.T,l1_delta)

print()
print("Prediction after Training:")
print(l1)
Error:0.517208275438
Error:0.00795484506673
Error:0.0055978239634
Error:0.00456086918013
Error:0.00394482243339
Error:0.00352530883742
Error:0.00321610234673
Error:0.00297605968522
Error:0.00278274003022
Error:0.0026227273927

Prediction after Training:
[[ 0.00301758]
 [ 0.00246109]
 [ 0.99799161]
 [ 0.99753723]]

What is the loss function here? How is it calculated?

Any idea how it would perform on non-linearly separable data? How could we test it?

Multilayer perceptron

Let’s use the fact that the sigmoid is differentiable (while the step function we saw in the slides is not). This allows us to add more layers (hence more modeling power).

In [2]:
import numpy as np

def sigmoid(x, deriv=False):
    if deriv:
        return x*(1-x)
    return 1/(1+np.exp(-x))

X = np.array([[0,0,1],
              [0,1,1],
              [1,0,1],
              [1,1,1]])

y = np.array([[0],
              [1],
              [1],
              [0]])

np.random.seed(1)

# randomly initialize our weights with mean 0
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1

for j in range(100000):

    # Feed forward through layers 0, 1, and 2.
    l0 = X
    l1 = sigmoid(np.dot(l0,syn0))
    l2 = sigmoid(np.dot(l1,syn1))

    # how much did we miss the target value?
    l2_error = y - l2
    
    if (j% 10000) == 0:
        print("Error:" + str(np.mean(np.abs(l2_error))))
        
    # in what direction is the target value?
    # were we really sure? if so, don't change too much.
    l2_delta = l2_error*sigmoid(l2,deriv=True)

    # how much did each l1 value contribute to the l2 error (according to the weights)?
    l1_error = l2_delta.dot(syn1.T)
    
    # in what direction is the target l1?
    # were we really sure? if so, don't change too much.
    l1_delta = l1_error * sigmoid(l1,deriv=True)

    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)
    
print()
print(l2)
Error:0.496410031903
Error:0.00858452565325
Error:0.00578945986251
Error:0.00462917677677
Error:0.00395876528027
Error:0.00351012256786
Error:0.00318350238587
Error:0.00293230634228
Error:0.00273150641821
Error:0.00256631724004

[[ 0.00199094]
 [ 0.99751458]
 [ 0.99771098]
 [ 0.00294418]]

Setting up the environment

We have done toy examples for feedforward networks. Things quickly become complicated, so let’s go deeper by relying on high-level frameworks: TensorFlow and Keras. Most technicalities are thus avoided so that you can directly play with networks.

In [ ]:
!conda install tensorflow keras
In [3]:
import tensorflow as tf
import keras
/Users/syedather/.local/lib/python3.6/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/Users/syedather/.matplotlib/matplotlibrc", line #2
  (fname, cnt))
Using TensorFlow backend.
In [4]:
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
b'Hello, TensorFlow!'

CNNs

We are going to use the MNIST dataset for our first task. The code below loads the dataset and shows one training example and its label.

In [5]:
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from pylab import *

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

print("The first training instance is labeled as: "+str(y_train[0]))
The first training instance is labeled as: 5
In [6]:
figure(1)
imshow(x_train[0], interpolation='nearest')
Out[6]:
<matplotlib.image.AxesImage at 0x1259b2320>

Now study the following code. What network do we use? How many layers? What hyperparameters?

In [7]:
# Setup some hyper parameters
batch_size = 128
num_classes = 10
epochs = 15

# input image dimensions
img_rows, img_cols = 28, 28

# This is some technicality regarding Keras' dataset
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

# We convert the matrices to floats as we will use real numbers
x_train = x_train.astype('float32')[:1000]
x_test = x_test.astype('float32')[:200]
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)[:1000]
y_test = keras.utils.to_categorical(y_test, num_classes)[:200]


# Build network
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])

# Train
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))

# Evaluate on test data
score = model.evaluate(x_test, y_test, verbose=0)
print()
print('Test loss:', score[0])
print('Test accuracy:', score[1])

# Evaluate on training data
score = model.evaluate(x_train, y_train, verbose=0)
print()
print('Train loss:', score[0])
print('Train accuracy:', score[1])
x_train shape: (1000, 28, 28, 1)
1000 train samples
200 test samples
Train on 1000 samples, validate on 200 samples
Epoch 1/15
1000/1000 [==============================] - 4s 4ms/step - loss: 1.7244 - acc: 0.5660 - val_loss: 0.9116 - val_acc: 0.7900
Epoch 2/15
1000/1000 [==============================] - 4s 4ms/step - loss: 0.5967 - acc: 0.8320 - val_loss: 0.5148 - val_acc: 0.8100
Epoch 3/15
1000/1000 [==============================] - 3s 3ms/step - loss: 0.4394 - acc: 0.8670 - val_loss: 0.3056 - val_acc: 0.8600
Epoch 4/15
1000/1000 [==============================] - 3s 3ms/step - loss: 0.3296 - acc: 0.9050 - val_loss: 0.3263 - val_acc: 0.9000
Epoch 5/15
1000/1000 [==============================] - 3s 3ms/step - loss: 0.2205 - acc: 0.9360 - val_loss: 0.2092 - val_acc: 0.9200
Epoch 6/15
1000/1000 [==============================] - 3s 3ms/step - loss: 0.1684 - acc: 0.9560 - val_loss: 0.1870 - val_acc: 0.9450
Epoch 7/15
1000/1000 [==============================] - 3s 3ms/step - loss: 0.1325 - acc: 0.9690 - val_loss: 0.1597 - val_acc: 0.9350
Epoch 8/15
1000/1000 [==============================] - 3s 3ms/step - loss: 0.0990 - acc: 0.9740 - val_loss: 0.1617 - val_acc: 0.9400
Epoch 9/15
1000/1000 [==============================] - 3s 3ms/step - loss: 0.0636 - acc: 0.9840 - val_loss: 0.1434 - val_acc: 0.9450
Epoch 10/15
1000/1000 [==============================] - 3s 3ms/step - loss: 0.0393 - acc: 0.9960 - val_loss: 0.1545 - val_acc: 0.9400
Epoch 11/15
1000/1000 [==============================] - 3s 3ms/step - loss: 0.0267 - acc: 0.9950 - val_loss: 0.1444 - val_acc: 0.9400
Epoch 12/15
1000/1000 [==============================] - 4s 4ms/step - loss: 0.0158 - acc: 1.0000 - val_loss: 0.1642 - val_acc: 0.9350
Epoch 13/15
1000/1000 [==============================] - 3s 3ms/step - loss: 0.0090 - acc: 1.0000 - val_loss: 0.1475 - val_acc: 0.9450
Epoch 14/15
1000/1000 [==============================] - 4s 4ms/step - loss: 0.0057 - acc: 1.0000 - val_loss: 0.1556 - val_acc: 0.9350
Epoch 15/15
1000/1000 [==============================] - 4s 4ms/step - loss: 0.0041 - acc: 1.0000 - val_loss: 0.1651 - val_acc: 0.9350

Test loss: 0.165074422359
Test accuracy: 0.935

Train loss: 0.00311407446489
Train accuracy: 1.0

Is there anything wrong here?

How do you think a linear classifier performs?

In [8]:
# Setup some hyper parameters
batch_size = 128
num_classes = 10
epochs = 15

# input image dimensions
img_rows, img_cols = 28, 28

# This is some technicality regarding Keras' dataset
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

# We convert the matrices to floats as we will use real numbers
x_train = x_train.astype('float32')[:1000]
x_test = x_test.astype('float32')[:200]
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)[:1000]
y_test = keras.utils.to_categorical(y_test, num_classes)[:200]


# Build network
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])

# Train
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))

# Evaluate on test data
score = model.evaluate(x_test, y_test, verbose=0)
print()
print('Test loss:', score[0])
print('Test accuracy:', score[1])

# Evaluate on training data
score = model.evaluate(x_train, y_train, verbose=0)
print()
print('Train loss:', score[0])
print('Train accuracy:', score[1])
x_train shape: (1000, 28, 28, 1)
1000 train samples
200 test samples
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-a1470fe28059> in <module>()
     53           epochs=epochs,
     54           verbose=1,
---> 55           validation_data=(x_test, y_test))
     56 
     57 # Evaluate on test data

~/anaconda3/lib/python3.6/site-packages/keras/models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
    961                               initial_epoch=initial_epoch,
    962                               steps_per_epoch=steps_per_epoch,
--> 963                               validation_steps=validation_steps)
    964 
    965     def evaluate(self, x=None, y=None,

~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
   1628             sample_weight=sample_weight,
   1629             class_weight=class_weight,
-> 1630             batch_size=batch_size)
   1631         # Prepare validation data.
   1632         do_validation = False

~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
   1478                                     output_shapes,
   1479                                     check_batch_axis=False,
-> 1480                                     exception_prefix='target')
   1481         sample_weights = _standardize_sample_weights(sample_weight,
   1482                                                      self._feed_output_names)

~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    111                         ': expected ' + names[i] + ' to have ' +
    112                         str(len(shape)) + ' dimensions, but got array '
--> 113                         'with shape ' + str(data_shape))
    114                 if not check_batch_axis:
    115                     data_shape = data_shape[1:]

ValueError: Error when checking target: expected dense_4 to have 2 dimensions, but got array with shape (1000, 10, 10)

Let’s use this model to predict a value for the first training instance we visualized.

In [ ]:
print(model.predict(np.expand_dims(x_train[0], axis=0)))

Is the model correct here? What is the output of the network?

RNNs

We will now switch to RNNs. These require more resources, so we can’t do the fanciest applications during the workshop. We will do some sentiment classification of movie reviews.

In [9]:
from __future__ import print_function
import numpy as np
import keras
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional
from keras.datasets import imdb

# Number of considered words, based on frequencies
max_features = 20000
# cut texts after this number of words
maxlen = 100
batch_size = 32

print('Loading data...')
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=max_features, index_from=3)

# This is just for pretty printing the sentences...
word_to_id = keras.datasets.imdb.get_word_index()
word_to_id = {k:(v+3) for k,v in word_to_id.items()}
word_to_id["<PAD>"] = 0
word_to_id["<START>"] = 1
word_to_id["<UNK>"] = 2
id_to_word = {value:key for key,value in word_to_id.items()}

print("Here's the input for the first training instance:")
print(' '.join(id_to_word[id] for id in x_train[0] ))
Loading data...
Downloading data from https://s3.amazonaws.com/text-datasets/imdb.npz
17465344/17464789 [==============================] - 2s 0us/step
Downloading data from https://s3.amazonaws.com/text-datasets/imdb_word_index.json
1646592/1641221 [==============================] - 0s 0us/step
Here's the input for the first training instance:
<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for retail and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also congratulations to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the praising list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all

What do you think about this text? Is it a positive or negative review?

In [10]:
print("Here are the dataset shapes")
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

print("And the input for the first instance is represented as:")
print(x_train[0])
Here are the dataset shapes
25000 train sequences
25000 test sequences
And the input for the first instance is represented as:
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 19193, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 10311, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 12118, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]

What do these numbers represent? Is there any limitation you can imagine coming from this?

In [11]:
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)[:5000]
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)[:5000]
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
y_train = np.array(y_train)[:5000]
y_test = np.array(y_test)[:5000]

model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(Bidirectional(LSTM(64)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])

print('Train...')
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=4,
          validation_data=[x_test, y_test])
Pad sequences (samples x time)
x_train shape: (5000, 100)
x_test shape: (5000, 100)
Train...
Train on 5000 samples, validate on 5000 samples
Epoch 1/4
5000/5000 [==============================] - 54s 11ms/step - loss: 0.6032 - acc: 0.6570 - val_loss: 0.4283 - val_acc: 0.8056
Epoch 2/4
5000/5000 [==============================] - 54s 11ms/step - loss: 0.2761 - acc: 0.8918 - val_loss: 0.4403 - val_acc: 0.7948
Epoch 3/4
5000/5000 [==============================] - 61s 12ms/step - loss: 0.1101 - acc: 0.9670 - val_loss: 0.6366 - val_acc: 0.8026
Epoch 4/4
5000/5000 [==============================] - 56s 11ms/step - loss: 0.0478 - acc: 0.9868 - val_loss: 0.6637 - val_acc: 0.7954
Out[11]:
<keras.callbacks.History at 0x1392d76d8>
In [12]:
print("The neural net predicts that the first instance sentiment is:")
print(model.predict(np.expand_dims(x_train[0], axis=0)))
The neural net predicts that the first instance sentiment is:
[[ 0.99445081]]

Remarks? Comments?

How do the training scores compare to the test scores? How can we improve this? What are the current limitations?

This RNN use case takes more time to train, but it is definitely more impressive. We will model language by training on a novel: for each sequence of characters in the novel, the objective is to predict the following character. This can be done on any text, and we don’t need annotated data – the text itself is enough.

Have a look at the following piece of code and try to understand what it does. Then, run it and see the network generating text! At first, the output is not meaningful, but it becomes so over time. This is the magic I was referring to.

Beware: this will take longer to run on a CPU. A GPU is recommended, but you can still try to run it for a while to see the predictions evolve. On my laptop, an epoch takes 6 minutes, so the full 60-epoch training takes 6 hours. About 20 epochs are required for the generated text to be somewhat meaningful.

Note, however, that although this seems long, training actual deep learning models for concrete tasks takes days, even on multiple GPUs. This is mostly because of the data size and the much deeper networks.

In [ ]:
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
import io

# We load a text from Nietzsche
path = get_file('nietzsche.txt', origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
with io.open(path, encoding='utf-8') as f:
    text = f.read().lower()
print('corpus length:', len(text))

# We create dictionaries of character > index and the other way around
chars = sorted(list(set(text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))

# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i: i + maxlen])
    next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))

print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1


# build the model: a single LSTM
print('Build model...')
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))

optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)


def sample(preds, temperature=1.0):
    # helper function to sample an index from a probability array
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)


def on_epoch_end(epoch, logs):
    # Function invoked at end of each epoch. Prints generated text.
    print()
    print('----- Generating text after Epoch: %d' % epoch)

    start_index = random.randint(0, len(text) - maxlen - 1)
    for diversity in [0.2, 0.5, 1.0, 1.2]:
        print('----- diversity:', diversity)

        generated = ''
        sentence = text[start_index: start_index + maxlen]
        generated += sentence
        print('----- Generating with seed: "' + sentence + '"')
        sys.stdout.write(generated)

        for i in range(400):
            x_pred = np.zeros((1, maxlen, len(chars)))
            for t, char in enumerate(sentence):
                x_pred[0, t, char_indices[char]] = 1.

            preds = model.predict(x_pred, verbose=0)[0]
            next_index = sample(preds, diversity)
            next_char = indices_char[next_index]

            generated += next_char
            sentence = sentence[1:] + next_char

            sys.stdout.write(next_char)
            sys.stdout.flush()
        print()

print_callback = LambdaCallback(on_epoch_end=on_epoch_end)

model.fit(x, y,
          batch_size=128,
          epochs=60,
          callbacks=[print_callback])
Downloading data from https://s3.amazonaws.com/text-datasets/nietzsche.txt
606208/600901 [==============================] - 0s 0us/step
corpus length: 600893
total chars: 57
nb sequences: 200285
Vectorization...
Build model...
Epoch 1/60
200285/200285 [==============================] - 281s 1ms/step - loss: 1.9553

----- Generating text after Epoch: 0
----- diversity: 0.2
----- Generating with seed: "to
agree with many people. "good" is no "
to
agree with many people. "good" is no and it is the the of the same the of the sention of the strenge of the most the self-our of the inderent that the sensive indeed the one of the constitute of the most of the semple of the desire of the sensive of the most of the semple of the sempathy of the one of the into the every to a soul of the some of the persent the free of the semple of the most of the sention of the of the spiritual the 
----- diversity: 0.5
----- Generating with seed: "to
agree with many people. "good" is no "
to
agree with many people. "good" is no may a suptimes and also orage mankind the one of indeed of one streng the possible the sensition and the inderenation of a sul the in a sould be the orting a solitiarity of religions in a man of such and a scient, in every of and the self-to and of a revilued it is the most in the indeed, and it is assual that the ord of the of the distiture in its all the manter of the soul permans the decours of
----- diversity: 1.0
----- Generating with seed: "to
agree with many people. "good" is no "
to
agree with many people. "good" is no causest and hew the fown of every groktulr
destined a the art it noteriness of one it all and
and cothinded of that rendercaterfroe to doe," in the pational the is the onl yutre
allor upitsoon,--one
viburan mused a "master in the that niver if
a pridicle quesiles of
the shoold enss nowxing to
feef ma.t--wute disequerly that then her rewadd finale the eeblive alse rusurefver" a selovery catte he re
----- diversity: 1.2
----- Generating with seed: "to
agree with many people. "good" is no "
to
agree with many people. "good" is no likeurenes, it is novamentstisuser'stone, indos paces. fund, wethel feel the
que let doee new eveny that is that the catel. thotgy is
within ceoks of theregeritades) and itwas brutmes ageteron
clyrelogilabl freephi; its. by an? andaver happ
one of his absuman artificss? itself old a
ooker himsood and bus hray
fined in smuch is sudtirers of rerarder from and
afutty
mest utfered with to "bewnook one
Epoch 2/60
 81664/200285 [===========>..................] - ETA: 2:37 - loss: 1.6395

Web Scraping with Python Made Easy

Imagine you run a business selling shoes online and want to monitor how your competitors price their products. You could spend hours a day clicking through page after page, or you could write a web bot, an automated piece of software that keeps track of a site’s updates. That’s where web scraping comes in.

Scraping websites lets you extract information from hundreds or thousands of webpages at once. You can search websites like Indeed for job opportunities or Twitter for tweets. In this gentle introduction to web scraping, we’ll go over the basic code to scrape websites such that anyone, regardless of background, can extract and analyze these kinds of results.

Getting Started

Using my GitHub repository on web scraping, you can install the software and run the scripts as instructed. Click on the src directory on the repository page to see the README.md file that explains each script and how to run them.

Examining the Site

You can use a sitemap file to locate where websites upload content without crawling every single web page. Here’s a sample one. You can also find out how large a site is and how much information you can actually extract from it: search a site using Google’s Advanced Search to figure out how many pages you may need to scrape. This will come in handy when creating a web scraper that may need to pause for updates or act in a different manner after reaching a certain number of pages.

You can also run the identify.py script in the src directory to figure out more information about how each site was built. This should give info about the frameworks, programming languages, and servers used in building each website, as well as the registered owner of the domain. It also uses robotparser to check for crawling restrictions.

Many websites have a robots.txt file with crawling restrictions. Make sure you check this file for a website to learn how to crawl it and any rules you should follow. The sample protocol can be found here.
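
As a quick illustration (example.com stands in for whichever site you’re checking), the standard library’s robotparser answers whether a crawler may fetch a given page:

import urllib.robotparser

# Ask a site's robots.txt whether a crawler may fetch a page.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()
print(rp.can_fetch("*", "https://example.com/some/page"))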

Crawling a Site

There are three general approaches to crawling a site: crawling a sitemap, iterating through the IDs of each webpage, and following webpage links. download.py shows how to download a webpage by crawling the sitemap, results.py shows how to scrape results while iterating through webpage IDs, and indeedScrape.py follows webpage links. download.py also contains information on inserting delays, returning a list of links from the HTML, and supporting proxies so you can access websites that block direct requests.
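
download.py has the full version; a stripped-down sketch of the same ideas (a polite delay, a couple of retries, and regex link extraction) might look like this:

import re
import time
import urllib.error
import urllib.request

def download(url, retries=2, delay=1):
    """Fetch a page's HTML, pausing before each request to be polite."""
    time.sleep(delay)
    try:
        return urllib.request.urlopen(url).read().decode("utf-8", "ignore")
    except urllib.error.URLError:
        # Retry a couple of times before giving up.
        return download(url, retries - 1, delay) if retries > 0 else None

def get_links(html):
    """Return the href targets of all anchor tags in the HTML."""
    return re.findall(r'<a[^>]+href=["\'](.*?)["\']', html)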

Scraping the Data

In the file compare.py, you can compare the efficiency of the three web scraping methods.

You can use regular expressions (known as regex or regexp) to perform neat tricks with text for getting information from websites. The script regex.py shows how this is done.
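
For instance (the HTML snippet and pattern here are made up for illustration), pulling a price out of a table cell takes one search:

import re

html = '<td class="price">$39.99</td>'
# Capture the digits of the price from the table cell.
match = re.search(r'<td class="price">\$([\d.]+)</td>', html)
if match:
    print(match.group(1))  # prints 39.99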

You can also use the browser extension Firebug Lite to get information from a webpage. In Chrome, you can click View >> Developer >> View Source to get the source behind a webpage.

Beautiful Soup, one of the required packages to run indeedScrape.py, parses a webpage and provides a convenient interface to navigate the content, as shown in bs4test.py. lxml also does this in lxmltest.py. A comparison of these three scraping methods is in the following table, with a short Beautiful Soup example after it.

Scraping method   Performance   Ease of use   Ease of install
Regex             Fast          Hard          Easy
Beautiful Soup    Slow          Easy          Easy
lxml              Fast          Easy          Hard
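
Here is a minimal example of the Beautiful Soup interface (the HTML string stands in for a downloaded page):

from bs4 import BeautifulSoup

html = "<ul><li>Data scientist</li><li>Web developer</li></ul>"
soup = BeautifulSoup(html, "html.parser")
# Walk the parsed tree and print the text of each list item.
for li in soup.find_all("li"):
    print(li.get_text())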

The callback.py script lets you scrape data and save it to an output .csv file.

Caching Downloads

Caching crawled webpages lets you store them in a manageable format while only having to download them once. In download.py, there’s a Python class Downloader that shows how to cache URLs after downloading their webpages. cache.py has a Python class that maps a URL to a filename when caching.

Depending on which operating system you’re using, there are limits on the filenames you can use for the cache.

Operating system   File system   Invalid filename characters   Max filename length
Linux              Ext3/Ext4     /, \0                         255 bytes
OS X               HFS Plus      :, \0                         255 UTF-16 code units
Windows            NTFS          \, /, ?, :, *, >, <, |        255 characters

Though cache.py is easy to use, you can take the hash of the URL itself to use as the filename, ensuring your files map directly to the URLs of the saved cache. Using MongoDB, you can build on top of the current file system database and avoid its limitations. This method is found in mongocache.py using pymongo, a Python wrapper for MongoDB.
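
A sketch of that hashing idea (the helper name and cache directory are placeholders):

import hashlib
import os

def url_to_cache_path(url, cachedir="cache"):
    """Map a URL to a cache filename via its hash, avoiding invalid characters."""
    name = hashlib.md5(url.encode("utf-8")).hexdigest()
    return os.path.join(cachedir, name + ".html")

print(url_to_cache_path("https://example.com/page?id=1"))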

Test out the other scripts, such as alexacb.py for downloading information on the top sites by Alexa ranking. mongoqueue.py has functionality for queueing jobs in MongoDB that can be imported into other scripts.

You can work with dynamic webpages using the code from browserrender.py. The majority of leading websites use JavaScript for functionality, meaning you can’t view all their content in barebones HTML.

An introduction to philosophy

Table of contents

Ethics

Classical ethics

  • Aristotle “Nicomachean Ethics” “On Virtues and Vices”

Christian and Medieval ethics

  • Thomas Aquinas “Summa Theologica”

  • Saint Bonaventure “Commentary on the Sentences”

  • Duns Scotus “Philosophical Writings”

  • William of Ockham “Sum of Logic”

Modern ethics

  • G. E. M. Anscombe “Modern Moral Philosophy”

  • David Gauthier “Morals by Agreement”

  • Alan Gewirth “Reason and Morality”

  • Allan Gibbard “Thinking How to Live”

  • Susan Hurley “Natural Reasons”

  • Christine Korsgaard “The Sources of Normativity”

  • John McDowell “Values and Secondary Qualities”

  • Alasdair MacIntyre “After Virtue”

  • J. L. Mackie “Ethics: Inventing Right and Wrong”

  • G. E. Moore “Principia Ethica”

  • Martha Nussbaum “The Fragility of Goodness”

  • Derek Parfit “Reasons and Persons”

  • Derek Parfit “On What Matters”

  • Peter Railton “Facts, Values, and Norms”

  • W. D. Ross “The Right and the Good”

  • Thomas M. Scanlon “What We Owe to Each Other”

  • Samuel Scheffler “The Rejection of Consequentialism”

  • Peter Singer “Practical Ethics”

  • Michael A. Smith “The Moral Problem”

  • Bernard Williams “Ethics and the Limits of Philosophy”

Postmodern ethics

  • Zygmunt Bauman “Postmodern Ethics”

  • Terry Eagleton “The Illusions of Postmodernism”

Bioethics

  • Don Marquis “Why Abortion is Immoral”

  • Paul Ramsey “The Patient as a Person” “Fabricated Man”

  • Judith Jarvis Thomson “A Defense of Abortion”

Meta-ethics (Metaethics)

  • P. F. Strawson “Freedom and Resentment”

Epistemology

  • Laurence Bonjour “The Structure of Empirical Knowledge”

  • Luc Bovens “Bayesian Epistemology”

  • Stanley Cavell “The Claim of Reason: Wittgenstein, Skepticism, Morality, and Tragedy”

  • Roderick Chisholm “Theory of Knowledge”

  • Keith DeRose “The Case for Contextualism”

  • René Descartes “Discourse on the Method”, “Meditations on First Philosophy”

  • Edmund Gettier “Is Justified True Belief Knowledge?”

  • Alvin Goldman “Epistemology and Cognition” “What is Justified Belief?”

  • Susan Haack “Evidence and Enquiry”

  • Hilary Kornblith “Knowledge and its Place in Nature”

  • Jonathan Kvanvig “The Value of Knowledge and the Pursuit of Understanding”

  • David K. Lewis “Elusive Knowledge”

  • G. E. Moore “A Defence of Common Sense”

  • Willard van Orman Quine “Epistemology Naturalized”

  • Richard Rorty “Philosophy and the Mirror of Nature”

  • Bertrand Russell “The Problems of Philosophy”

  • Jason Stanley “Knowledge and Practical Interest”

  • Stephen Stich “The Fragmentation of Reason”

  • Peter Unger “Ignorance: A Case for Scepticism”

  • Timothy Williamson “Knowledge and its Limits”

Logic

  • Donald Davidson “Truth and Meaning”

  • Gottlob Frege “Begriffsschrift”

  • Kurt Gödel, “On Formally Undecidable Propositions of Principia Mathematica and Related Systems”

  • Saul Kripke, “Semantical Considerations on Modal Logic”

  • Charles Sanders Peirce “How to Make Our Ideas Clear”

  • Alfred Tarski “The Concept of Truth”

Aesthetics

  • Theodor Adorno “Aesthetic Theory”

  • R.G. Collingwood “The Principles of Art”

  • Arthur C. Danto “After the End of Art”

  • Nelson Goodman “Languages of Art: An Approach to a Theory of Symbols”

  • George Santayana “The Sense of Beauty”

Metaphysics

  • Aristotle “Metaphysics”

  • D.M. Armstrong “Universals and Scientific Realism”

  • A. J. Ayer “Language, Truth, and Logic”

  • Rudolf Carnap “Empiricism, Semantics, and Ontology”

  • David Chalmers “Constructing the World”

  • John Dewey “Experience and Nature”

  • William James “Pragmatism”

  • Immanuel Kant “Groundwork of the Metaphysics of Morals”

  • James Ladyman, Don Ross, David Spurrett, John Collier “Every Thing Must Go: Metaphysics Naturalized”

  • John McDowell “Mind and World”

  • David Kellogg Lewis “On the Plurality of Worlds”

  • Stephen Mumford “Dispositions”

  • Derek Parfit “Reasons and Persons”

  • Willard Van Orman Quine “Two Dogmas of Empiricism” “On What There Is”

  • Theodore Sider “Writing the Book of the World”

  • Alfred North Whitehead “Process and Reality”

  • Timothy Williamson “Modal Logic as Metaphysics”

  • Ludwig Wittgenstein “Tractatus Logico-Philosophicus” (a.k.a. The Tractatus)

Philosophy of the mind

  • D. M. Armstrong “A Materialist Theory of the Mind”

  • Peter Carruthers “The Architecture of the Mind”

  • David Chalmers “Philosophy of Mind: Classical and Contemporary Readings” “The Character of Consciousness” “The Conscious Mind: In Search of a Fundamental Theory”

  • Paul Churchland “Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind”

  • Andy Clark “Supersizing the Mind: Embodiment, Action, and Cognitive Extension”

  • Daniel Dennett “Consciousness Explained”

  • Jaegwon Kim “Philosophy of Mind”

  • Ruth Millikan “Varieties of Meaning”

  • Gilbert Ryle “The Concept of Mind”

History of philosophy

Western civilization

  • Bertrand Russell “A History of Western Philosophy”

Classical philosophy

  • Marcus Aurelius “Meditations”

  • Plato “Symposium” “Parmenides” “Phaedrus”

Christian and Medieval

  • Augustine of Hippo “Confessions” “The City of God”

  • Anselm of Canterbury “Proslogion”

Early modern

  • Sir Francis Bacon “Novum Organum”

  • Jeremy Bentham “An Introduction to the Principles of Morals and Legislation”

  • Henri Bergson “Time and Free Will” “Matter and Memory”

  • George Berkeley “Treatise Concerning the Principles of Human Knowledge”

  • Auguste Comte “Course of Positive Philosophy”

  • René Descartes “Principles of Philosophy” “Passions of the Soul”

  • Desiderius Erasmus “The Praise of Folly”

  • Johann Gottlieb Fichte “Foundations of the Science of Knowledge”

  • Hugo Grotius “De iure belli ac pacis”

  • Georg Wilhelm Friedrich Hegel “Phenomenology of Spirit” “Science of Logic” “The Philosophy of Right” “The Philosophy of History”

  • Thomas Hobbes “Leviathan”

  • David Hume “A Treatise of Human Nature” “Four Dissertations” “Essays, Moral, Political, and Literary” “An Enquiry Concerning Human Understanding” “An Enquiry Concerning the Principles of Morals”

  • Immanuel Kant “A Critique of Pure Reason” “Critique of Practical Reason” “A Critique of Judgement”

  • Søren Kierkegaard “Either/Or” “Fear and Trembling” “The Concept of Anxiety”

  • Gottfried Leibniz “Discourse on Metaphysics” “New Essays Concerning Human Understanding” “Théodicée” “Monadology”

  • John Locke “Two Treatises of Government” “An Essay Concerning Human Understanding”

  • Niccolò Machiavelli “The Prince”

  • Karl Marx “The Communist Manifesto” “Das Kapital”

  • John Stuart Mill “On Liberty” “Utilitarianism”

  • John Stuart Mill and Harriet Taylor Mill “The Subjection of Women”

  • Michel de Montaigne “Essays”

  • Friedrich Nietzsche “Thus Spoke Zarathustra” “Beyond Good and Evil” “On the Genealogy of Morals”

  • Blaise Pascal “Pensées”

  • Jean-Jacques Rousseau “Discourse on the Arts and Sciences” “Emile: or, On Education” “The Social Contract”

  • Arthur Schopenhauer “The World as Will and Representation”

  • Henry Sidgwick “The Methods of Ethics”

  • Adam Smith “The Theory of Moral Sentiments” “The Wealth of Nations”

  • Herbert Spencer “System of Synthetic Philosophy”

  • Baruch Spinoza “Ethics” “Tractatus Theologico-Politicus”

  • Max Stirner “The Ego and Its Own”

  • Mary Wollstonecraft “A Vindication of the Rights of Woman”

Contemporary

Phenomenology and existentialism
  • Simone de Beauvoir “The Second Sex”

  • Albert Camus “Myth of Sisyphus”

  • Martin Heidegger “Being and Time”

  • Edmund Husserl “Logical Investigations” “Cartesian Meditations” “Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy”

  • Maurice Merleau-Ponty “Phenomenology of Perception”

  • Jean-Paul Sartre, “Being and Nothingness” “Critique of Dialectical Reason”

Hermeneutics and deconstruction
  • Jacques Derrida “Of Grammatology”

  • Hans-Georg Gadamer “Truth and Method”

  • Paul Ricœur “Freud and Philosophy: An Essay on Interpretation”

Structuralism and post-structuralism
  • Michel Foucault “The Order of Things”

  • Gilles Deleuze “Difference and Repetition”

  • Gilles Deleuze and Felix Guattari “Capitalism and Schizophrenia”

  • Luce Irigaray “Speculum of the Other Woman”

  • Michel Foucault “Discipline and Punish”

Critical theory and Marxism
  • Theodor Adorno “Negative Dialectics”

  • Louis Althusser “Reading Capital”

  • Alain Badiou “Being and Event”

  • Jürgen Habermas “Theory of Communicative Action”

  • Max Horkheimer and Theodor Adorno “Dialectic of Enlightenment”

  • Georg Lukacs “History and Class Consciousness”

  • Herbert Marcuse “Reason and Revolution” “Eros and Civilization”

Eastern civilization

Chinese philosophy

  • “The Record of Linji”

  • Han Fei “Han Feizi”

  • Kongzi “Analects” “Five Classics”

  • Laozi “Dao De Jing”

  • Mengzi “Mengzi”

  • Sunzi “Art of War”

  • Zhou Dunyi “The Taiji Tushuo”

  • Zhu Xi “Four Books” “Reflections on Things at Hand”

Indian philosophy

  • “The Upanishads”

  • “The Bhagavad Gita” (“The Song of God”)

  • Aksapada Gautama “Nyaya Sutras”

  • Isvarakrsna “Sankhya Karika”

  • Kanada “Vaisheshika Sutra”

  • Patañjali “Yoga Sutras”

  • Swami Swatamarama “Hatha Yoga Pradipika”

  • Vyasa “Brahma Sutras”

  • Thiruvalluvar “Tirukkural”

Islamic philosophy

  • Al-Ghazali “The Incoherence of the Philosophers”

Japanese philosophy

  • Hakuin Ekaku “Wild Ivy”

  • Honen “One-Sheet Document”

  • Kukai “Attaining Enlightenment in this Very Existence”

  • Zeami Motokiyo “Style and Flower”

  • Miyamoto Musashi “The Book of Five Rings”

  • Shinran “Kyogyoshinsho”

  • Dogen Zenji “Shōbōgenzō”

Philosophy of other disciplines

Education

  • John Dewey “Democracy and Education”

  • Terry Eagleton “The Slow Death of the University”

  • Paulo Freire “Pedagogy of the Oppressed”

  • Martha Nussbaum “Not for Profit: Why Democracy Needs the Humanities”

  • B.F. Skinner “Walden Two”

  • Charles Weingartner and Neil Postman “Teaching as a Subversive Activity”

Religion

  • William Lane Craig “The Kalam Cosmological Argument”

  • J. L. Mackie “The Miracle of Theism”

  • Dewi Zephaniah Phillips “Religion Without Explanation”

  • Alvin Plantinga “God and Other Minds” “Is Belief in God Properly Basic?”

  • William Rowe “The Evidential Argument from Evil: A Second Look”

  • J. L. Schellenberg “Divine Hiddenness and Human Reason”

  • Richard Swinburne “The Existence of God”

Science

  • Paul Feyerabend “Against Method: Outline of an Anarchistic Theory of Knowledge”

  • Bas C. van Fraassen “The Scientific Image”

  • Nelson Goodman “Fact, Fiction, and Forecast”

  • Thomas Samuel Kuhn “The Structure of Scientific Revolutions”

  • Larry Laudan “The Demise of the Demarcation Problem”

  • David K. Lewis “How to Define Theoretical Terms”

  • Karl Pearson “The Grammar of Science”

  • Karl Popper “The Logic of Scientific Discovery”

  • Hans Reichenbach “The Rise of Scientific Philosophy”

Mathematics

  • Alfred North Whitehead and Bertrand Russell “Principia Mathematica”

  • Paul Benacerraf “What Numbers Could not Be” “Mathematical Truth”

  • Paul Benacerraf and Hilary Putnam “Philosophy of Mathematics: Selected Readings”

  • George Boolos “Logic, Logic and Logic”

  • Hartry Field “Science without Numbers: The Defence of Nominalism”

  • Imre Lakatos “Proofs and Refutations”

  • Penelope Maddy “Second Philosophy”

Physics

  • Aristotle “Physics”

  • Michel Bitbol “Mécanique quantique : Une introduction philosophique” “Schrödinger’s Philosophy of Quantum Mechanics”

  • Chris Isham and Jeremy Butterfield “On the Emergence of Time in Quantum Gravity”

  • Tim Lewens “The Meaning of Science: An Introduction to the Philosophy of Science”

Computer science

  • Scott Aaronson “Why Philosophers Should Care About Computational Complexity”

  • Judea Pearl “Causality”

  • Ray Turner “The Philosophy of Computer Science” “Computational Artefacts: Towards a Philosophy of Computer Science”

Neuroscience

  • John Bickle “Revisionary Physicalism” “Psychoneural Reduction of the Genuinely Cognitive: Some Accomplished Facts” “Psychoneural Reduction: The New Wave” “Philosophy and Neuroscience: A Ruthlessly Reductive Account”

  • Patricia Churchland “Brain-Wise: Studies in Neurophilosophy” “Neurophilosophy: Toward a Unified Science of the Mind-Brain”

  • Carl Craver “Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience”

  • Georg Northoff “Philosophy of the Brain: The brain problem”

  • Henrik Walter “Neurophilosophy of Free Will: From Libertarian Illusions to a Concept of Natural Autonomy”

Chemistry

  • Jaap van Brakel “Philosophy of Chemistry”

Biology

  • Daniel C. Dennett “Darwin’s Dangerous Idea”

  • Ruth Garrett Millikan “Language, Thought, and Other Biological Categories”

  • Erwin Schrödinger “What is Life? The Physical Aspect of the Living Cell”

  • Elliott Sober “The Nature of Selection”

Sociology

  • B. F. Skinner “Science and Human Behavior”

Psychology

  • Donald Davidson “The Very Idea of a Conceptual Scheme”

  • William James “The Principles of Psychology”

Economics

  • Kenneth Arrow “Social Choice and Individual Values”

  • Ludwig von Mises “The Ultimate Foundation of Economic Science”

  • Elizabeth S. Anderson “Value in Ethics and Economics”

Arts and Humanities

  • Bernard Williams “Philosophy as a Humanistic Discipline”

Art

  • Clive Bell “Art”

  • George Dickie “Art and the Aesthetic”

Music

  • Roger Scruton “Music as an Art”

Literature

  • Aristotle “Poetics”

Language

  • J. L. Austin, “A Plea for Excuses” “How To Do Things With Words”

  • Robert Brandom “Making it Explicit”

  • Stanley Cavell “Must We Mean What We Say?”

  • David Chalmers “Two Dimensional Semantics”

  • Cora Diamond “What Nonsense Might Be”

  • Michael Dummett “Frege: Philosophy of Language”

  • Gottlob Frege “On Sense and Reference”

  • H. P. Grice “Logic and Conversation”

  • Saul Kripke “Naming and Necessity”

  • David K. Lewis “General Semantics”

  • Willard Van Orman Quine “Word and Object”

  • Bertrand Russell “On Denoting”

  • John Searle “Speech Acts”

  • Ludwig Wittgenstein “Philosophical Investigations”

History

  • R.G. Collingwood “The Idea of History”

  • Karl Löwith “Meaning in History: The Theological Implications of the Philosophy of History”

Medicine

  • Mario Bunge “Medical Philosophy: Conceptual Issues in Medicine”

  • R. Paul Thompson and Ross E. G. Upshur “Philosophy of Medicine”

Law

  • Ronald Dworkin “Law’s Empire”

  • John Finnis “Natural Law and Natural Rights”

  • Lon L. Fuller “The Morality of Law”

  • H.L.A. Hart “The Concept of Law”

Politics

  • Aristotle “Politics”

  • Isaiah Berlin “Two Concepts of Liberty”

  • Robert Nozick “Anarchy, State, and Utopia”

  • Plato “Republic”

  • Karl Popper “The Open Society and Its Enemies”

  • John Rawls “A Theory of Justice”

  • Michael Sandel “Liberalism and the Limits of Justice”

An introduction to ethics

Table of contents

  • What is ethics?
  • Reading

    What is ethics?

    Ethics is, approximately, the study of questions about the nature, content, and application of morality, and so the study of morality in general.

    Questions of moral language, psychology, phenomenology, epistemology, and ontology typically fall under metaethics.

    Questions of theoretical content (what makes something right, wrong, good, bad, obligatory, or supererogatory) typically fall under normative ethics.

    Questions of conduct on specific real-world issues (business, professional, social, and environmental ethics, bioethics, and personhood) typically fall under applied ethics. These include issues like abortion, euthanasia, the treatment of non-human animals, marketing, and charity.

    Traditionally, then, ethics has been divided into these three areas concerning how we ought to conduct ourselves.

    Meta-ethics (Metaethics)

    Metaethics is occasionally referred to as a “second-order” discipline to distinguish it from areas that are less about what morality itself is. Questions about the most plausible metaphysical account of moral facts, or about the links between moral judgment, motivation, and knowledge, are questions about morality itself, and so are metaethical questions. Several rough divisions have been drawn to introduce metaethics adequately; either of the two below should be sufficient for getting a general sense of what metaethics is.

    Metaethics as the systematic analysis of moral language, psychology, and ontology

    In Andrew Fisher’s Metaethics: An Introduction, an intro book Fisher at one point playfully thought of as “An Introduction to An Introduction to Contemporary Metaethics,” we get this:

    Looking at ethics we can see that it involves what people say: moral language. So one strand of metaethics considers what is going on when people talk moral talk. For example, what do people mean when they say something is “wrong”? What links moral language to the world? Can we define moral terms?

    Obviously ethics also involves people, so metaethicists consider and analyse what’s going on in peoples’ minds. For example, when people make moral judgements are they expressing beliefs or expressing desires? What’s the link between making moral judgements and motivation?

    Finally, there are questions about what exists (ontology). Thus meta-ethicists ask questions about whether moral properties are real. What is it for something to be real? Could moral facts exist independently of people? Could moral properties be causal?

    Metaethics, then, is the systematic analysis of:

    (a) moral language; (b) moral psychology; (c) moral ontology. This classification is rough and does not explicitly capture a number of issues that are often discussed in metaethics, such as truth and phenomenology. However, for our purposes we can think of such issues as falling under these broad headings.

    Metaethics as concerned with meaning, metaphysics, epistemology and justification, phenomenology, moral psychology, and objectivity

    In Alex Miller’s Contemporary Metaethics: An Introduction (the book Fisher playfully compared his own introduction to), Miller provides us with perhaps the most succinct description of the three:

    [Metaethics is] concerned with questions about the following:

    (a) Meaning: what is the semantic function of moral discourse? Is the function of moral discourse to state facts, or does it have some other non-fact-stating role?

    (b) Metaphysics: do moral facts (or properties) exist? If so, what are they like? Are they identical or reducible to natural facts (or properties) or are they irreducible and sui generis?

    (c) Epistemology and justification: is there such a thing as moral knowledge? How can we know whether our moral judgements are true or false? How can we ever justify our claims to moral knowledge?

    (d) Phenomenology: how are moral qualities represented in the experience of an agent making a moral judgement? Do they appear to be ‘out there’ in the world?

    (e) Moral psychology: what can we say about the motivational state of someone making a moral judgement? What sort of connection is there between making a moral judgement and being motivated to act as that judgement prescribes?

    (f) Objectivity: can moral judgements really be correct or incorrect? Can we work towards finding out the moral truth?

    Obviously, this list is not intended to be exhaustive, and the various questions are not all independent (for example, a positive answer to (f) looks, on the face of it, to presuppose that the function of moral discourse is to state facts). But it is worth noting that the list is much wider than many philosophers forty or fifty years ago would have thought. For example, one such philosopher writes:

    [Metaethics] is not about what people ought to do. It is about what they are doing when they talk about what they ought to do. (Hudson 1970)

    The idea that metaethics is exclusively about language was no doubt due to the once prevalent idea that philosophy as a whole has no function other than the study of ordinary language and that ‘philosophical problems’ only arise from the application of words out of the contexts in which they are ordinarily used. Fortunately, this ‘ordinary language’ conception of philosophy has long since ceased to hold sway, and the list of metaethical concerns – in metaphysics, epistemology, phenomenology, moral psychology, as well as in semantics and the theory of meaning – bears this out.

    Two small notes that might be made are:

    “Objectivity” is standardly taken to mean mind-independence. Here it almost seems as if it is cognitivism that Miller is describing, but since Miller notes that (f) presupposes that moral discourse states facts, it becomes clear that by “correct” Miller means “objectively true.” This is a somewhat unorthodox usage, but careful reading makes it clear what Miller is trying to say.

    “Moral phenomenology” is often categorized as falling under normative ethics as well, but this has little impact on the veracity of this description of metaethics.

    Applied ethics

    Applied ethics is concerned with what is permissible in particular practices. In Peter Singer’s Practical Ethics, Singer provides some examples of what sorts of things this field might address.

    Practical ethics covers a wide area. We can find ethical ramifications in most of our choices, if we look hard enough. This book does not attempt to cover this whole area. The problems it deals with have been selected on two grounds: their relevance, and the extent to which philosophical reasoning can contribute to a discussion of them.

    I regard an ethical issue as relevant if it is one that any thinking person must face. Some of the issues discussed in this book confront us daily: what are our personal responsibilities towards the poor? Are we justified in treating animals as nothing more than machines producing flesh for us to eat? Should we be using paper that is not recycled? And why should we bother about acting in accordance with moral principles anyway? Other problems, like abortion and euthanasia, fortunately are not everyday decisions for most of us; but they are issues that can arise at some time in our lives. They are also issues of current concern about which any active participant in our society’s decision-making process needs to reflect.

    ….

    This book is about practical ethics, that is, the application of ethics or morality — I shall use the words interchangeably — to practical issues like the treatment of ethnic minorities, equality for women, the use of animals for food and research, the preservation of the natural environment, abortion, euthanasia, and the obligation of the wealthy to help the poor.

    So what does the application of ethics to practical issues look like?

    We can take a look at two of the issues that Singer brings up — abortion and animal rights — to get a sense of what sort of evidence might be taken into consideration with these matters. Keep in mind that this is written with the intention of providing a sense of how discussions in applied ethics develop rather than a comprehensive survey of views in each topic.

    Abortion

    In Rosalind Hursthouse’s Virtue Theory and Abortion, Hursthouse summarizes the discussion of abortion as centering on the tension between facts about the moral status of the fetus and women’s rights.

    As everyone knows, the morality of abortion is commonly discussed in relation to just two considerations: first, and predominantly, the status of the fetus and whether or not it is the sort of thing that may or may not be innocuously or justifiably killed; and second, and less predominantly (when, that is, the discussion concerns the morality of abortion rather than the question of permissible legislation in a just society), women’s rights.

    In A Defense of Abortion, Judith Jarvis Thomson addresses a common version of the former consideration, refuting a slippery slope argument.

    Most opposition to abortion relies on the premise that the fetus is a human being, a person, from the moment of conception. The premise is argued for, but, as I think, not well. Take, for example, the most common argument. We are asked to notice that the development of a human being from conception through birth into childhood is continuous; then it is said that to draw a line, to choose a point in this development and say “before this point the thing is not a person, after this point it is a person” is to make an arbitrary choice, a choice for which in the nature of things no good reason can be given. It is concluded that the fetus is, or anyway that we had better say it is, a person from the moment of conception. But this conclusion does not follow. Similar things might be said about the development of an acorn into an oak tree, and it does not follow that acorns are oak trees, or that we had better say they are. Arguments of this form are sometimes called “slippery slope arguments”–the phrase is perhaps self-explanatory–and it is dismaying that opponents of abortion rely on them so heavily and uncritically.

    Nonetheless, Thomson is willing to grant the premise, addressing instead whether we can make the case that abortion is impermissible given that the fetus is, indeed, a person. Thomson thinks the argument (that fetuses have a right to life, and that this right outweighs the right of the person carrying the fetus to do as they wish with their body) is faulty, but notes a limitation.

    But now let me ask you to imagine this. You wake up in the morning and find yourself back to back in bed with an unconscious violinist. A famous unconscious violinist. He has been found to have a fatal kidney ailment, and the Society of Music Lovers has canvassed all the available medical records and found that you alone have the right blood type to help. They have therefore kidnapped you, and last night the violinist’s circulatory system was plugged into yours, so that your kidneys can be used to extract poisons from his blood as well as your own. The director of the hospital now tells you, “Look, we’re sorry the Society of Music Lovers did this to you–we would never have permitted it if we had known. But still, they did it, and the violinist is now plugged into you. To unplug you would be to kill him. But never mind, it’s only for nine months. By then he will have recovered from his ailment, and can safely be unplugged from you.” Is it morally incumbent on you to accede to this situation? No doubt it would be very nice of you if you did, a great kindness. But do you have to accede to it? What if it were not nine months, but nine years? Or longer still? What if the director of the hospital says, “Tough luck. I agree, but now you’ve got to stay in bed, with the violinist plugged into you, for the rest of your life. Because remember this. All persons have a right to life, and violinists are persons. Granted you have a right to decide what happens in and to your body, but a person’s right to life outweighs your right to decide what happens in and to your body. So you cannot ever be unplugged from him.” I imagine you would regard this as outrageous, which suggests that something really is wrong with that plausible-sounding argument I mentioned a moment ago.

    In this case, of course, you were kidnapped, you didn’t volunteer for the operation that plugged the violinist into your kidneys.

    Thomson goes on to address this limitation, moving back and forth between the fetus’s and the carrier’s rights, but Hursthouse (see above) rejects this framework, noting in more detail that we can suppose women have a legal right to abortion and still have to wrestle with whether abortion is morally permissible. On the status of fetuses, Hursthouse claims this too can be bypassed with virtue theory.

    What about the consideration of the status of the fetus-what can virtue theory say about that? One might say that this issue is not in the province of any moral theory; it is a metaphysical question, and an extremely difficult one at that. Must virtue theory then wait upon metaphysics to come up with the answer?

    ….

    But the sort of wisdom that the fully virtuous person has is not supposed to be recondite; it does not call for fancy philosophical sophistication, and it does not depend upon, let alone wait upon, the discoveries of academic philosophers. And this entails the following, rather startling, conclusion: that the status of the fetus-that issue over which so much ink has been spilt-is, according to virtue theory, simply not relevant to the rightness or wrongness of abortion (within, that is, a secular morality).

    Or rather, since that is clearly too radical a conclusion, it is in a sense relevant, but only in the sense that the familiar biological facts are relevant. By “the familiar biological facts” I mean the facts that most human societies are and have been familiar with-that, standardly (but not invariably), pregnancy occurs as the result of sexual intercourse, that it lasts about nine months, during which time the fetus grows and develops, that standardly it terminates in the birth of a living baby, and that this is how we all come to be.

    It is worth noting that Hursthouse’s argument more centrally gives her conception of what virtue ethics ought to look like rather than how we should go about abortion, and so to avoid it clouding her paper, she never takes any stance on whether one should think abortion is or is not permissible.

    Thomson’s argument appears to be rather theory-agnostic whereas Hursthouse is committed to a certain theory of ethics. A third approach is intertheoretical, an example of which can be found in Tomasz Żuradzki’s Meta-Reasoning in Making Moral Decisions under Normative Uncertainty. Here, Żuradzki discusses how we might deal with uncertainty over which theory is correct.

    For example, we have to act in the face of uncertainty about the facts, the consequences of our decisions, the identity of people involved, people’s preferences, moral doctrines, specific moral duties, or the ontological status of some entities (belonging to some ontological class usually has serious implications for moral status). I want to analyze whether these kinds of uncertainties should have practical consequences for actions and whether there are reliable methods of reasoning that deal with the possibility that we understand some crucial moral issues wrong.

    Żuradzki at one point considers the seemingly obvious “My Favorite Theory” approach, but concludes that the approach is problematic.

    Probably the most obvious proposition how to act under normative uncertainty is My Favorite Theory approach. It says that “a morally conscientious agent chooses an option that is permitted by the most credible moral theory”

    ….

    Although this approach looks very intuitive, there are interesting counter-examples.

    Żuradzki also addresses a few different approaches, some of which seem to make abortion impermissible so long as there is uncertainty, but perhaps this gives a good idea of three approaches in applied ethics.

    Animal rights

    In the abortion section, the status of the fetus falls into the background. Thomson says even given a certain status, the case against abortion must do more, Hursthouse says the metaphysical question can be bypassed altogether, and Żuradzki considers how to take multiple theories about an action into account. But it seems this strategy of moving beyond the status of the patient in question cannot be done when it comes to the question of how we ought to treat non-human animals, for there’s no obvious competing right that might give us pause when we decide not to treat a non-human animal cruelly. In dealing with animal rights, then, it appears we are forced to address the status of the non-human animal, and there seem to be many ways to address this.

    In Tom Regan’s The Case for Animal Rights, Regan, who agrees with Kant that those who are worthy of moral consideration are ends-in-themselves, thinks what grounds that worthiness in humans is also what grounds that in non-human animals.

    We want and prefer things, believe and feel things, recall and expect things. And all these dimensions of our life, including our pleasure and pain, our enjoyment and suffering, our satisfaction and frustration, our continued existence or our untimely death – all make a difference to the quality of our life as lived, as experienced, by us as individuals. As the same is true of those animals that concern us (the ones that are eaten and trapped, for example), they too must be viewed as the experiencing subjects of a life, with inherent value of their own.

    Christine Korsgaard, who also agrees with a Kantian view, argues against Regan’s view and thinks non-human animals are not like humans. In Fellow Creatures: Kantian Ethics and Our Duties to Animals, Korsgaard makes the case that humans are rational in a sense that non-human animals are not, and that rationality is what grounds our moral obligations.

    an animal who acts from instinct is conscious of the object of its fear or desire, and conscious of it as fearful or desirable, and so as to-be-avoided or to-be-sought. That is the ground of its action. But a rational animal is, in addition, conscious that she fears or desires the object, and that she is inclined to act in a certain way as a result.

    ….

    We cannot expect the other animals to regulate their conduct in accordance with an assessment of their principles, because they are not conscious of their principles. They therefore have no moral obligations.

    Korsgaard, however, thinks that even this fundamental distinction between humans and non-human animals leaves room for animals to be worthy of moral consideration.

    Because we are animals, we have a natural good in this sense, and it is to this that our incentives are directed. Our natural good, like the other forms of natural good which I have just described, is not, in and of itself, normative. But it is on our natural good, in this sense, that we confer normative value when we value ourselves as ends-in-ourselves. It is therefore our animal nature, not just our autonomous nature, that we take to be an end-in-itself.

    ….

    In taking ourselves to be ends-in-ourselves we legislate that the natural good of a creature who matters to itself is the source of normative claims. Animal nature is an end-in-itself, because our own legislation makes it so. And that is why we have duties to the other animals.

    So Regan thinks that we can elevate the status of non-human animals up to something like the status of humans, but Korsgaard thinks there is a vast difference between the two categories. Before we consider which view is more credible, we should consider an additional, non-Kantian view which seems to bypass the issue of status once more.

    Rosalind Hursthouse (again!), in Applying Virtue Ethics to Our Treatment of the Other Animals, argues that status need not be relevant for roughly the same reasons as the case of abortion.

    In the abortion debate, the question that almost everyone began with was “What is the moral status of the fetus?”

    ….

    The consequentialist and deontological approaches to the rights and wrongs of the ways we treat the other animals (and also the environment) are structured in exactly the same way. Here too, the question that must be answered first is “What is the moral status of the other animals…?” And here too, virtue ethicists have no need to answer the question.

    So Hursthouse once again reframes the debate, grounding her argument in terms of virtue.

    So I take the leaves on which [Singer describes factory farming] and think about them in terms of, for example, compassion, temperance, callousness, cruelty, greed, self-indulgence—and honesty.

    Can I, in all honesty, deny the ongoing existence of this suffering? No, I can’t. I know perfectly well that although there have been some improvements in the regulation of factory farming, what is going on is still terrible. Can I think it is anything but callous to shrug this off and say it doesn’t matter? No, I can’t. Can I deny that the practices are cruel? No, I can’t.

    ….

    The practices that bring cheap meat to our tables are cruel, so we shouldn’t be party to them.

    Żuradzki’s argument in Meta-Reasoning in Making Moral Decisions under Normative Uncertainty becomes relevant once more as well. In it, he argues that if between the competing theories, one says something is wrong and one says nothing of the matter, it would be rational to act as if it were wrong.

    Comparativism in its weak form can be applied only to very specific kinds of situations in which an agent’s credences are not divided between two different moral doctrines, but between only one moral doctrine and some doctrine (or doctrines) that does not give any moral reasons. Its conclusion says that if some theories in which you have credence give you subjective reason to choose action A over action B, and no theories in which you have credence give you subjective reason to choose action B over action A, then you should (because of the requirements of rationality) choose A over B.

    Once again, we see a variety of approaches that help give us a sense of the type of strategies that applied ethicists might use. Here, we have arguments that accept and reject a central premise of the debate, an argument that bypasses it, and an argument that considers both views. Some approaches are theory-specific, some are intertheoretical, and while it was not discussed here, Singer’s argument from marginal cases is theory-neutral.

    Other issues will differ wildly: they will rely on different central premises, have arguments such that intertheoretical approaches are impossible, or vary in any number of other ways from the two topics just discussed. However, this gives some idea, hopefully enough to build on if one chooses to look deeper into the literature, of how discussions in applied ethics proceed.

    Normative ethics

    Normative ethics deals very directly with the question of conduct. Much of the discipline is dedicated to discovering ethical theories capable of describing what we ought to do. But what does ought mean? While ought tends to deal with normativity and value, it does not always deal with ethics; the oughts that link aesthetics and normativity, for instance, are not obviously the same as the oughts we’re dealing with here. The oughts of normative ethics have a great deal to do with concepts like what is “permissible” or “impermissible,” what is “right” or “wrong,” or what is “good” and “bad.” Normative ethics should also be contrasted with descriptions of how people do act, and with the moral code of some person or group: it is about what genuinely is correct when it comes to how we ought to live our lives. For now, we can roughly divide the main theories of this area into three categories: consequentialism, deontology, and virtue theory. These are not the only categories, and there are other problems in normative ethics as well, but these three types of theories are detailed below, along with what we should take from an understanding of them.

    Ethics as grounded in outcomes: Consequentialism

    Consequentialism is a family of theories centrally concerned with consequences. In ordinary practice, “consequentialism” is used to refer to theories rooted in classical utilitarianism (even when the theory is not utilitarianism itself), setting aside certain theories that also seem grounded solely in consequences, such as egoism. The classical utilitarianism that serves as the historical and conceptual root of this discussion entails a great many claims, laid out in Shelly Kagan’s Normative Ethics:

    • goodness of outcomes is the only morally relevant factor in determining the status of a given act.

    • the agent is morally required to perform the act with the best consequences. It is not sufficient that an act have “pretty good” consequences, that it produce more good than harm, or that it be better than average. Rather, the agent is required to perform the act with the very best outcome (compared to alternatives); she is required to perform the optimal act, as it is sometimes called.

    • the optimal act is the only act that is morally permissible; no other act is morally right. Thus the consequentialist is not making the considerably more modest claim that performing the act with the best consequences is—although generally not obligatory—the nicest or the most praiseworthy thing to do. Rather, performing the optimal act is morally required: anything else is morally forbidden.

    • the right act is the act that leads to the greatest total amount of happiness overall.

    • the consequences [are evaluated] in terms of how they affect everyone’s well-being…

    And of course, these can be divided even further, but what’s salient is that there appear to be a great many more claims entailed in this classical form of utilitarianism than one might think at first glance: classical utilitarianism is an agent-neutral theory in which the act that actually results in the optimal amount of happiness for everyone is obligatory. By understanding all of these points, we can understand how consequentialism differs from this classical utilitarianism and thus what it means to be consequentialist.

    The limits of contemporary consequentialism

    Many of these claims don’t seem necessary to the label “consequentialism” and give us an unnecessarily narrow sense of what the word could mean.

    It seems desirable, then, to broaden the scope of the term, and in fact this has been done not simply to help understand consequentialism, but to defend it against criticisms. In Campbell Brown’s Consequentialize This, we get a brief description of one motivation behind radical consequentializing:

    You—a nonconsequentialist, let’s assume—begin with your favorite counterexample. You describe some action…[that] would clearly have the best consequences, yet equally clearly would be greatly immoral. So consequentialism is false, you conclude; sometimes a person ought not to do what would have best consequences. “Not so fast,” comes the consequentialist’s reply. “Your story presupposes a certain account of what makes consequences better or worse, a certain ‘theory of the good,’ as we consequentialists like to say. Consequentialism, however, is not wedded to any such theory…In order to reconcile consequentialism with the view that this action you’ve described is wrong, we need only to find an appropriate theory of the good, one according to which the consequences of this action would not be best. You say you’re concerned about the guy’s rights? No worries; we’ll just build that into your theory of the good. Then you can be a consequentialist too.”

    So, Brown says, this is what has just occurred:

    Instead of showing that your nonconsequentialism is mistaken, the consequentialist shows that it’s not really nonconsequentialism; instead of refuting your view, she ‘consequentializes’ it. If you can’t beat ’em, join ’em. Better still, make ’em join you.

    Is this a good strategy? Brown thinks not, for it weakens the consequentialist’s claim.

    It might succeed in immunizing consequentialism against counterexamples only at the cost of severely weakening it, perhaps to the point of utter triviality. So effortlessly is the strategy deployed that some are led to speculate that it is without theoretical limits: every moral view may be dressed up in consequentialist clothing…But then, it seems, consequentialism would be empty—trivial, vacuous, without substantive content, a mere tautology. The statement that an action is right if and only if (iff) it maximizes the good would entail nothing more substantive than the statement that an action is right iff it is right; true perhaps, but not of much use.

    So not too broad, not too narrow, and not too shifty. We want some solid and only sufficiently broad meaning to work from. Brown goes on to define what he thinks consequentialism minimally is, along with three limits that must be placed upon it.

    whatever is meant by ‘consequentialism’, it must be intelligible as an elaboration of the familiar consequentialist slogan “Maximize the good.” The non-negotiable core of consequentialism, I shall assume, is the claim that an action is right, or permissible, iff it maximizes the good. My strategy is to decompose consequentialism into three conditions, which I call ‘agent neutrality’, ‘no moral dilemmas’, and ‘dominance’.

    As usually defined, a theory is agent-relative iff it gives different aims to different agents; otherwise it’s agent-neutral.

    By a moral dilemma, I mean a situation in which a person cannot avoid acting wrongly…Consider, for example, a theory which holds that violations of rights are absolutely morally forbidden; it is always wrong in any possible situation to violate a right. Suppose, further, that the catalog of rights endorsed by this theory is such that sometimes a person cannot help but violate at least one right. Then this theory cannot be represented by a rightness function which satisfies NMD, and so it cannot be consequentialized.

    [Dominance] may be the least intuitive of the three. It requires the following. Suppose that in a given choice situation, two worlds x and y are among the alternatives. And suppose in this situation, x is right and y wrong. Then x dominates y in the following sense: y cannot be right in any situation where x is an alternative; the presence of x is always sufficient to make y wrong.

    And there we have it, a definition of consequentialism. Not only that, but this definition is formalized in the paper as well. Can we safely say, then, that this is the definition of consequentialism? The most comprehensive, elucidating, uncontroversial in the field? Certainly not! In fact, it leaves out several significant forms of consequentialism, but this formulation captures many concepts important to consequentialism, sufficient for further discussion of the three families. The disagreement over the definition might bring a new set of worries to the mind of any reader; the problem of disagreement will be discussed in another section.

    Ethics as grounded in moral law: Deontology

    Deontology is another family of theories whose definition can slip through our grasp (there’s a pattern here worth recognizing that will become important in a later section). Once more, Shelly Kagan’s Normative Ethics offers us a definition of deontology as it is used in contemporary discourse: a theory that places value on additional factors that would forbid certain actions independently of whether or not they result in the best outcomes.

    In defining deontology, I have appealed to the concept of a constraint: deontologists, unlike consequentialists, believe in the existence of constraints, which erect moral barriers to the promotion of the good…it won’t quite do to label as deontologists all those who accept additional normative factors, beyond that of goodness of results: we must add further stipulation that in at least some cases the effect of these additional factors is to make certain acts morally forbidden, even though these acts may lead to the best possible results overall. In short, we must say that deontologists are those who believe in additional normative factors that generate constraints.

    Kagan goes on to explain why, of the various definitions, this one is best. That explanation will not be detailed here, but let’s keep this definition tenuously in mind as we dive into one of the deontological theories to get a sense of what deontology entails. It would be absurd if these constraints were arbitrary, nothing more than consequentialism combined with “also, don’t do these specific things because they seem icky and I don’t like them,” so we will take a look at one of the prominent deontological theories: Kantianism.

    Kant’s First Formula

    In Julia Driver’s Ethics: The Fundamentals, Driver introduces us to deontology through Kant’s moral theory, saying this of the theory:

    Immanuel Kant’s theory is perhaps the most well-known exemplar of the deontological approach…whether or not a contemplated course of action is morally permissible will depend on whether or not it conforms to what he terms the moral law, the categorical imperative.

    There’s a tone here that seems noticeably different from consequentialist talk. Permissibility as conforming to moral law could still be consequentialist if that law is something like “maximize the good,” but this description seems to indicate something else. To figure this out, we need an explanation of what “the categorical imperative” means. In Christine Korsgaard’s Creating the Kingdom of Ends:

    Hypothetical imperatives [are] principles which instruct us to do certain actions if we want certain ends…

    ….

    Willing something is determining yourself to be the cause of that thing, which means determining yourself to use the available causal connections — the means — to it. “Willing the end” is already posited as the hypothesis, and we need only analyze it to arrive at willing the means. If you will to be able to play the piano, then you already will to practice, as that is the “indispensably necessary means to it” that “lie in your power.” But the moral ought is not expressed by a hypothetical imperative. Our duties hold for us regardless of what we want. A moral rule does not say “do this if you want that” but simply “do this.” It is expressed in a categorical imperative. For instance, the moral law says that you must respect the rights of others. Nothing is already posited, which can then be analyzed.

    We now have a fairly detailed description of the distinction between a hypothetical and a categorical imperative, with fine examples to boot. Already, it’s clear this theory can’t be consequentialized according to Brown, but we must go further to remove any doubt arising from controversy over Brown’s formulation. Korsgaard goes on to explain what is necessarily entailed as part of the categorical imperative in her description of Kant’s first formula.

    If we remove all purposes — all material — from the will, what is left is the formal principle of the will. The formal principle of duty is just that it is duty — that it is law. The essential character of law is universality. Therefore, the person who acts from duty attends to the universality of his/her principle. He or she only acts on a maxim that he or she could will to be universal law (G 402).

    ….

    But how can you tell whether you are able to will your maxim as a universal law? On Kant’s view, it is a matter of what you can will without contradiction…you envision trying to will your maxim in a world in which the maxim is universalized — in which it is a law of nature. You are to “Ask yourself whether, if the action which you propose should take place by a law of nature of which you yourself were a part, you could regard it as possible through your will” (C2 69)

    Upon encountering just this first formulation of the categorical imperative, we have established that any limit on consequentializing would leave Kant’s moral theory able to resist it. For one, the rightness or wrongness of actions is a matter of conforming to moral law, such that outcomes are no longer centrally a point of consideration. This does not mean we have deprived ethics of consequences, as Kagan points out in Normative Ethics:

    [the goodness of outcomes]

    is a factor I think virtually everyone recognizes as morally relevant. It may not be the only factor that is important for determining the moral status of an act, but it is certainly one relevant factor.

    Kantianism nonetheless decides the status of actions on more than the sole basis of outcomes, and it fails Brown’s dominance condition.

    The two other formulas are not within the scope of this section, nor is the evidence for Kant’s theory. The purpose of detailing Kantianism at all was to demonstrate deontology as conformity to moral law in a manner distinct from consequentialism. It is sufficient to remind ourselves that there is a massive amount of argument for each of these types of theories without detailing it here for this theory in particular. There are also other types of deontological theories, likewise well supported: Scanlon’s moral theory and Ross’s moral theory are other prominent examples of deontology.

    We are now left with a fairly strong sense of what deontological theories look like. There is some imprecision in that sense; this will be discussed in another section. For now, we must move on to address virtue ethics.

    Ethics as grounded in character: Virtue Ethics

    Virtue ethics, the final family of theories described in the section on normative ethics, is predictably concerned primarily with virtue and practical intelligence.

    Virtue

    A virtue is described as lasting, reliable, and characteristic in Julia Annas’s Intelligent Virtue:

    A virtue is a lasting feature of a person, a tendency for the person to be a certain way. It is not merely a lasting feature, however, one that just sits there undisturbed. It is active: to have it is to be disposed to act in certain ways. And it develops through selective response to circumstances. Given these points, I shall use the term persisting rather than merely lasting. Jane’s generosity, supposing her to be generous, persists through challenges and difficulties, and is strengthened or weakened by her generous or ungenerous responses respectively. Thus, although it is natural for us to think of a virtue as a disposition, we should be careful not to confuse this with the scientific notion of disposition, which just is a static lasting tendency…

    ….

    A virtue is also a reliable disposition. If Jane is generous, it is no accident that she does the generous action and has generous feelings. We would have been surprised, and shocked, if she had failed to act generously, and looked for some kind of explanation. Our friends’ virtues and vices enable us to rely on their responses and behaviour—to a certain extent, of course, since none of us is virtuous enough to be completely reliable in virtuous response and action.

    ….

    Further, a virtue is a disposition which is characteristic—that is, the virtuous (or vicious) person is acting in and from character when acting in a kindly, brave or restrained way. This is another way of putting the point that a virtue is a deep feature of the person. A virtue is a disposition which is central to the person, to whom he or she is, a way we standardly think of character. I might discover that I have an unsuspected talent for Sudoku, but this, although it enlarges my talents, does not alter my character. But someone who discovers in himself an unsuspected capacity to feel and act on compassion, and who develops this capacity, does come to change as a person, not just in some isolated feature; he comes to have a changed character.

    Virtue ethics, then, is centered around something roughly like this concept. Note that any plausible theory is going to incorporate all of the concepts we’ve gone over in normative ethics. We can go back to Kagan’s Normative Ethics from above, where he notes the relevance of consequences in every theory.

    all plausible theories agree that goodness of consequences is at least one factor relevant to the moral status of acts. (No plausible theory would hold, for example, that it was irrelevant whether an act would lead to disaster!)

    Similarly, other theories will have an account of virtue, as Jason Kawall’s In Defense of the Primacy of the Virtues briefly describes:

    Consequentialists will treat the virtues as character traits that serve to maximize (or produce sufficient quantities of) the good, where the good is taken as explanatorily basic. Deontologists will understand the virtues in terms of dispositions to respect and act in accordance with moral rules, or to perform morally right actions, where these moral rules or right actions are fundamental. Furthermore, the virtues will be considered valuable just insofar as they involve such tendencies to maximize the good or to perform right actions.

    So it is important to stress that virtue is the central concept for virtue ethics: virtue ethics is not simply any theory that includes an account of virtue, any more than consequentialism is any theory that includes an account of consequences. One way we can come to understand virtue ethics better is by examining a specific kind of virtue ethics: theories satisfying four conditions laid out by Kawall:

    (i) The concepts of rightness and goodness would be explained in terms of virtue concepts (or the concept of a virtuous agent).

    (ii) Rightness and goodness would be explained in terms of the virtues or virtuous agents.

    (iii) The explanatory primacy of the virtues or virtuous agents (and virtue concepts) would reflect a metaphysical dependence of rightness and goodness upon the virtues or virtuous agents.

    (iv) The virtues or virtuous agents themselves – as well as their value – could (but need not) be explained in terms of further states, such as health, eudaimonia, etc., but where these further states do not require an appeal to rightness or goodness.

    It should be emphasized again that this describes only some theories in this family, but they are good theories to focus on because much of the discussion around these theories would be representative of discussion around virtue ethics in general.

    It is worth stressing that not all theories that could plausibly be understood as forms of virtue ethics would satisfy the above conditions; the current goal is not to defend all possible virtue ethics. Rather, we are examining what might be taken to be among the more radical possible forms of virtue ethics, particularly in treating the virtues as explanatorily prior both to rightness and to goodness tout court. Why focus on these more radical forms? First, several prominent virtue ethics can be understood as satisfying the above conditions, including those of Michael Slote, Linda Zagzebski, and, perhaps (if controversially), Aristotle’s paradigmatic virtue ethics. Beyond this, many of the arguments presented here could be taken on board by those defending more moderate forms of virtue ethics, such as Rosalind Hursthouse or Christine Swanton (against those who would attempt to argue for the explanatory primacy of the right or of the good, for example). Thus the range of interest for most of these arguments will extend beyond those focusing on the more radical approaches.

    Practical intelligence

    Practical intelligence can be described much more briefly. In Rosalind Hursthouse’s Applying Virtue Ethics to Our Treatment of the Other Animals, we get a short description of its role.

    Of course, applying the virtue and vice terms correctly may be difficult; one may need much practical wisdom to determine whether, in a particular case, telling a hurtful truth is cruel or not, for example…

    Julia Annas elaborates in greater detail in “Intelligent Virtue”:

    The way our characters develop is to some extent a matter of natural endowment; some of us have traits ‘by nature’—we will tend to act bravely or generously without having to learn to do so, or to think about it. This is ‘natural virtue’, which we have already encountered. Different people will have different natural virtues, and one person may be naturally endowed in one area of life but not others—naturally brave, for example, but not naturally generous. However, claims Aristotle, this can’t be the whole story about virtue. For one thing, children and animals can have some of these traits, but in them they are not virtues. Further, these natural traits are harmful if not guided by ‘the intellect’, which in this context is specified as practical wisdom or practical intelligence (phronesis). Just as a powerfully built person will stumble and fall if he cannot see, so a natural tendency to bravery can stumble unseeingly into ethical disaster because the person has not learned to look out for crucial factors in the situation. Our natural practical traits need to be formed and educated in an intelligent way for them to develop as virtues; a natural trait may just proceed blindly on where virtue would respond selectively and in a way open to novel information and contexts.

    Ethics as maximizing happiness: Utilitarianism

    In the famous trolley problem, which philosopher Philippa Foot introduced in the 1960s, you can pull a lever to divert a runaway trolley from running over five tied-up people lying on the tracks. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, a single person is lying on the side track.

    According to classical utilitarianism, pulling the lever is permissible and, indeed, the more moral choice. English philosophers Jeremy Bentham and John Stuart Mill introduced utilitarianism, which holds that the sole moral obligation is to maximize happiness, as an alternative to divine, religious theories of ethics. Utilitarianism suffers from the problem of “utility monsters”: individuals who would derive much more happiness (and therefore utility) than average, so that maximizing total happiness skews actions toward the monster’s happiness in ways that make others suffer. Since philosopher Robert Nozick introduced the utility monster in 1974, the idea has appeared in political discussions of special interest groups and free speech, as though securing these interests would serve the few who experience much more happiness than the general population.
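
    To make the utilitarian calculus concrete, here is a minimal Python sketch of how a classical utilitarian would score the two options. The welfare numbers are invented placeholders; the theory only requires summing welfare across everyone affected and choosing the action with the greatest total.

        # A toy utilitarian calculus for the trolley case. The welfare
        # scores (0 for a death, 10 for a life saved) are hypothetical.
        def total_utility(outcome):
            """Sum the welfare of everyone affected by an outcome."""
            return sum(outcome.values())

        # Do nothing: the five on the main track die, the one lives.
        do_nothing = {f"track_person_{i}": 0 for i in range(5)}
        do_nothing["side_track_person"] = 10

        # Pull the lever: the five live, the one on the side track dies.
        pull_lever = {f"track_person_{i}": 10 for i in range(5)}
        pull_lever["side_track_person"] = 0

        options = [("do nothing", do_nothing), ("pull lever", pull_lever)]
        best = max(options, key=lambda pair: total_utility(pair[1]))
        print(best[0])  # -> pull lever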

    Are these taxonomic imperfections bad? How do we get over vague definitions?

    It might be tempting to read all of this and think there’s some sort of difficulty in discussing normative ethics. In general, academic discourse does not hinge on definitions, and so definitions are not a very large concern. And yet, it might appear upon reading this that ethics is some sort of exception. When philosophers talk about adaptationism in evolution or causation in metaphysics, the definitions they provide seem a lot more precise, so why is ethics an exception?

    The answer, uninterestingly, is that ethics is not an exception. It is important not to mistake what has been read here for some fundamental ambiguity in these theories. Consider Brown’s motive for resisting consequentialization as a response to Dreier’s motive for consequentialization.

    I’ll close by drawing out another moral of my conclusion, related to something Dreier says. Dreier’s motivation for consequentializing is that he wants to overcome a certain “stigma” which he says afflicts defenders of “common sense morality” when they try to deny consequentialism. To deny consequentialism, he says, they must claim that we are sometimes required to do less good than we might, but that claim has a “paradoxical air.” So defenders of commonsense morality, who deny consequentialism, are stigmatized as having a seemingly paradoxical position.

    ….

    Dreier thinks the way to avoid the stigma is to avoid denying consequentialism. If we consequentialize commonsense morality, then defenders of commonsense morality need not deny consequentialism. If I’m right, however, this way of avoiding the stigma doesn’t work…

    Note that this is entirely orthogonal to the plausibility of any particular theory. Whatever stigmas exist make no difference to whether some particular theory happens to be correct. The distinction may prove useful for helping beginners gain a sense of what they’re talking about, but beyond pedagogical utility, it’s disputed that it actually tells us, at a fundamental level, what these theories are all about.

    In Michael Ridge’s Reasons for Action: Agent-Neutral vs. Agent-Relative, Ridge points out one of the alternative distinctions that might have a more prominent role in describing what fundamentally distinguishes these theories.

    The agent-relative/agent-neutral distinction is widely and rightly regarded as a philosophically important one.

    ….

    The distinction has played a very useful role in framing certain interesting and important debates in normative philosophy.

    For a start, the distinction helps frame a challenge to the traditional assumption that what separates so-called consequentialists and deontologists is that the former but not the latter are committed to the idea that all reasons for action are teleological. A deontological restriction forbids a certain sort of action (e.g., stealing) even when stealing here is the only way to prevent even more stealing in the long run. Consequentialists charge that such a restriction must be irrational, on the grounds that if stealing is forbidden then it must be bad but if it is bad then surely less stealing is better than more. The deontologist can respond in one of two ways. First, they could hold that deontological restrictions correspond to non-teleological reasons. The reason not to steal, on this account, is not that stealing is bad in the sense that it should be minimized but rather simply that stealing is forbidden no matter what the consequences (this is admittedly a stark form of deontology, but there are less stern versions as well). This is indeed one way of understanding the divide between consequentialists and deontologists, but the agent-relative/agent-neutral distinction, and in particular the idea of agent-relative reasons, brings to the fore an alternative conception. For arguably, we could instead understand deontological restrictions as corresponding to a species of reasons which are teleological after all so long as those reasons are agent-relative. If my reason not to steal is that I should minimize my stealing then the fact that my stealing here would prevent five other people from committing similar acts of theft does nothing to suggest that I ought to steal.

    ….

    If Dreier is right [that in effect we can consequentialize deontology] then the agent-relative/agent-neutral distinction may be more important than the distinction between consequentialist theories and non-consequentialist theories.

    The section goes on to detail several ways we can look at this issue to understand the importance of this distinction and what it can tell us about the structure and plausibility of certain theories. So while the typical division between consequentialist, deontological, and virtue ethical theories can be superficially valuable to those getting into ethics, it is important not to overstate the significance of these families and their implications.

    Reading

    Normative ethics

    Includes a minimal definition of normative ethics as a whole.

    In this entry, Ridge lays out another way of categorizing theories in normative ethics in an accessible manner.

    Issues in normative ethics

    • Christopher Heathwood Welfare. 2010.
    • Roger Crisp Stanford Encyclopedia of Philosophy entry on Well-being. 2017.
    • Michael Zimmerman Stanford Encyclopedia of Philosophy entry on Intrinsic vs. Extrinsic Value. 2014.
    • Dana Nelkin Stanford Encyclopedia of Philosophy entry on Moral Luck. 2013.
    • Stephen Stich, John Doris, and Erica Roedder Altruism. 2008.
    • Robert Shaver Stanford Encyclopedia of Philosophy entry on Egoism. 2014.
    • Joshua May Internet Encyclopedia of Philosophy entry on Psychological Egoism. 2011.

    Consequentialism

    About the best introduction that one can find to one of the consequentialist theories: utilitarianism.

    An introduction to the debate over utilitarianism.

    An influential work that lays out a decent strategy for keeping consequentialist theories of ethics distinct from other theories.

    • Walter Sinnott-Armstrong’s Stanford Encyclopedia of Philosophy entry on Consequentialism. 2015.
    • William Haines Internet Encyclopedia of Philosophy entry on Consequentialism. 2006.
    • Chapters 3 and 4 of Driver (see above). 2006.

    Deontology

    A good introduction to and strong defense of Kantianism.

    Rawls’s revolutionary work in both ethics and political philosophy in which he describes justice as fairness, a view he would continue to develop later on.

    A significant improvement and defense of one of the most influential deontological alternatives to Kantianism: Rossian deontology.

    Scanlon, one of the most notable contributors to political and ethical philosophy among his contemporaries, provides an updated and comprehensive account of his formulation of contractualism.

    • Larry Alexander and Michael Moore Stanford Encyclopedia of Philosophy entry on Deontological Ethics. 2016.
    • Chapters 5 and 6 of Driver (see above). 2006.

    Virtue ethics

    Hursthouse’s groundbreaking and accessible work on virtue theory.

    Meta-ethics (Metaethics)

    This is probably a more difficult read than the others, but it is incredibly comprehensive and helpful. There are many things in this handbook that I’ve been reading about for a long time that I didn’t feel confident about until reading this. Certainly worth the cost.

    Moral judgement

    A must-read for those who want to engage with issues in moral judgment, functioning both as the work popularly considered the most important on the topic and as a great introduction.

    • Chapter 3 of Miller (see above). 2013.
    • Connie S. Rosati Stanford Encyclopedia of Philosophy entry on Moral Motivation. 2016.

    Moral responsibility

    Moral realism and irrealism

    A very popular Philosophy Compass paper that lays out very simply what moral realism is without arguing for or against any position.

    An obligatory text laying out the popular companions in guilt argument for moral realism.

    • Smith (see above). 1998.
    • Enoch (see above). 2011.
    • Chapters 8, 9, and 10 of Miller (see above). 2013.
    • Shafer-Landau (see above). 2005.
    • Katia Vavova Debunking Evolutionary Debunking. 2013.

    Here, Vavova provides a very influential, comprehensive, and easy to read overview of evolutionary debunking arguments, in which she also takes the liberty of pointing out their flaws.

    Korsgaard’s brilliant description, as well as her defense, of a form of Kantian constructivism.

    Research Ethics

    Websites

    National Center for Professional and Research Ethics (NCPRE) – https://www.nationalethicscenter.org/

    National Science Foundation Office of Inspector General – http://www.nsf.gov/oig/index.jsp

    Office for Human Research Protections (OHRP) – http://www.hhs.gov/ohrp/

    Office of Research Integrity (ORI) – http://ori.dhhs.gov/

    Online Ethics Center for Engineering and Research – http://onlineethics.org/

    Project for Scholarly Integrity – http://www.scholarlyintegrity.org/

    Resources for Research Ethics Education – http://research-ethics.net/

    Email lists

    RCR-Instruction, Office of Research Integrity – send a request to askori@hhs.gov to subscribe

    Journals

    Accountability in Research – http://www.tandf.co.uk/journals/titles/08989621.asp

    Ethics and Behavior – http://www.tandf.co.uk/journals/titles/10508422.asp

    Journal of Empirical Research on Human Research Ethics – http://www.ucpressjournals.com/journal.asp?j=jer

    Science and Engineering Ethics – http://www.springer.com/philosophy/ethics/journal/11948#8085218705268172855

    News publications

    The Chronicle of Higher Education – http://www.chronicle.com/

    Nature – http://www.nature.com/

    Science – http://www.sciencemag.org/

    The Scientist – http://www.thescientist.com

    Ethical theory

    Frankena, William K. 1988. Ethics. 2nd ed. Prentice-Hall, Inc.

    Rachels, James, and Stuart Rachels. 2009. The Elements of Moral Philosophy. 6th ed. McGraw-Hill Companies.

    Books

    Beach, Dore. 1996. Responsible Conduct of Research. John Wiley & Sons, Incorporated.

    Bebeau, Muriel J., et al. 1995. Moral Reasoning in Scientific Research: Cases for Teaching and Assessment. Poynter Center for the Study of Ethics and American Institutions. Source: Order or download in PDF format at http://poynter.indiana.edu/mr/mr-main.shtml.

    Bulger, Ruth Ellen, Elizabeth Heitman, and Stanley Joel Reiser, eds. 2002. The Ethical Dimensions of the Biological and Health Sciences. 2nd ed. Cambridge University Press.

    Elliott, Deni, and Judy E. Stern, eds. 1997. Research Ethics: A Reader. University Press of New England. See also Stern and Elliott, The Ethics of Scientific Research.

    Erwin, Edward, Sidney Gendin, and Lowell Kleiman, eds. 1994. Ethical Issues in Scientific Research: An Anthology. Garland Publishing.

    Fleddermann, Charles B. 2007. Engineering Ethics. 3rd ed. Prentice Hall.

    Fluehr-Lobban, Carolyn. 2002. Ethics and the Profession of Anthropology: Dialogue for Ethically Conscious Practice. 2nd ed. AltaMira Press.

    Goodstein, David L. 2010. On Fact and Fraud: Cautionary Tales from the Front Lines of Science. Princeton University Press.

    Harris, Charles E., Jr., Michael S. Pritchard, and Michael J. Rabins. 2008. Engineering Ethics: Concepts and Cases. 4th ed. Wadsworth.

    Israel, Mark, and Iain Hay. 2006. Research Ethics for Social Scientists: Between Ethical Conduct and Regulatory Compliance. SAGE Publications, Limited.

    Johnson, Deborah G. 2008. Computer Ethics. 4th ed. Prentice Hall PTR.

    Korenman, Stanley G., and Allan C. Shipp. 1994. Teaching the Responsible Conduct of Research through a Case Study Approach: A Handbook for Instructors. Association of American Medical Colleges. Source: Order from http://www.aamc.org/publications/

    Loue, Sana. 2000. Textbook of Research Ethics: Theory and Practice. Springer.

    Macrina, Francis L. 2005. Scientific Integrity: Text and Cases in Responsible Conduct of Research. 3rd ed. ASM Press.

    Miller, David J., and Michel Hersen, eds. 1992. Research Fraud in the Behavioral and Biomedical Sciences. John Wiley & Sons, Incorporated.

    Murphy, Timothy F. 2004. Case Studies in Biomedical Research Ethics. MIT Press.

    National Academy of Sciences. 2009. On Being a Scientist: A Guide to Responsible Conduct in Research. 3rd ed. National Academy Press. Source: Order from http://www.nap.edu/catalog.php?record_id=12192

    National Academy of Sciences. 1992. Responsible Science, Vol. 1: Ensuring the Integrity of the Research Process. Source: Order from http://www.nap.edu/catalog.php?record_id=1864

    National Academy of Sciences. 1992. Responsible Science, Vol. 2: Background Papers and Resource Documents. Source: Order from http://www.nap.edu/catalog.php?record_id=2091

    Oliver, Paul. 2010. The Students’ Guide to Research Ethics. 2nd ed. McGraw-Hill Education.

    Orlans, F. Barbara, et al., eds. 2008. The Human Use of Animals: Case Studies in Ethical Choice. 2nd ed. Oxford University Press.

    Penslar, Robin Levin, ed. 1995. Research Ethics: Cases and Materials. Indiana University Press.

    Resnik, David B. 1998. The Ethics of Science: An Introduction. Routledge.

    Schrag, Brian, ed. 1997-2006. Research Ethics: Cases and Commentaries. Seven volumes. Association for Practical and Professional Ethics. Source: Order from http://www.indiana.edu/~appe/publications.html#research.

    Seebauer, Edmund G., and Robert L. Barry. 2000. Fundamentals of Ethics for Scientists and Engineers. Oxford University Press.

    Seebauer, Edmund G. 2000. Instructor’s Manual for Fundamentals of Ethics for Scientists and Engineers. Oxford University Press.

    Shamoo, Adil E., and David B. Resnik. 2009. Responsible Conduct of Research. Oxford University Press.

    Shrader-Frechette, Kristin S. 1994. Ethics of Scientific Research. Rowman & Littlefield Publishers, Inc.

    Sieber, Joan E. 1992. Planning Ethically Responsible Research: A Guide for Students and Internal Review Boards. SAGE Publications, Inc.

    Sigma Xi. 1999. Honor in Science. Sigma Xi, the Scientific Research Society. Source: Order from http://www.sigmaxi.org/resources/merchandise/index.shtml

    Sigma Xi. 1999. The Responsible Researcher: Paths and Pitfalls. Sigma Xi, the Scientific Research Society. Source: Order from http://www.sigmaxi.org/resources/merchandise/index.shtml or download in PDF format at http://sigmaxi.org/programs/ethics/ResResearcher.pdf

    Steneck, Nicholas H. 2007. ORI Introduction to the Responsible Conduct of Research. Revised ed. DIANE Publishing Company. Source: Order from http://bookstore.gpo.gov/collections/ori-research.jsp or download in PDF format at http://ori.dhhs.gov/publications/oriintrotext.shtml.

    Stern, Judy E., and Deni Elliott. 1997. The Ethics of Scientific Research: A Guidebook for Course Development. University Press of New England. See also Elliott and Stern, eds., Research Ethics: A Reader.

    Vitelli, Karen D., and Chip Colwell-Chanthaphonh, eds. 2006. Archaeological Ethics. 2nd ed. AltaMira Press.

A Comparison of Copper in the U.S.

As humanity’s oldest metal, copper comes in many forms, and people have used it for thousands of years. The ancient Romans mined the metal on Cyprus and called it “cyprium,” later “cuprum,” from which English gets “copper.”

Copper is produced and consumed in many forms, from the windings of electric motors to the coating of pennies. Thanks to its high thermal and electrical conductivity, the material is frequently used in telecommunication technologies and as a building material.

The process of copper production includes mining, refining, smelting, and electrowinning. Through smelting and electrolytic refining, engineers and scientists transform mined ores to copper cathodes. Cathodes are thin sheets of pure copper used as raw material for processing the metal into high-quality products. 

Using data available to the public from the U.S. Geological Survey, we can see how the copper market has changed with society’s needs over the past decades.

The four major types of copper are mined copper, secondary copper, refined copper and refined electrowon copper. Secondary copper comes from recycled and scrap materials such as tubes, sheets, cables, radiators and castings, as well as from residues like dust or slag. 

As noted above, engineers and scientists transform mined copper and concentrated low-grade ores into copper cathodes through smelting and electrolytic refining. Acid leaching of oxidized ores produces additional copper.

Thanks to its chemical and physical properties, copper is well suited to electrical and thermal conduction. Copper’s high ductility and malleability give it key roles in industrial applications such as coil winding, power transmission and generation, and telecommunication technologies.

The different methods of processing copper remained largely constant between 1990 and 2010. The data come from “U.S. Mineral Dependence—Statistical Compilation of U.S. and World Mineral Production, Consumption, and Trade, 1990–2010” by James J. Barry, Grecia R. Matos, and W. David Menzie. The rise in refined copper reflects rising market demand for refined copper, according to a report in Mining.com. Oxide and sulfide ores generally contain between 0.5 and 2.0% copper; processing involves concentrating the ore to remove gangue and other materials.


Differences between reported and apparent processed copper consumption in the U.S. have decreased from 2005 to 2009. Copper consumption itself has dropped.

The mix of copper types produced in the U.S. has remained roughly constant over the period.

Mined copper has remained the dominant type produced around the world, though refined copper came close to or equaled it from 1996 to 2001. Refined electrowon copper has also steadily pulled ahead of secondary copper over the period.

The epistemology and metaphysics of causality

The epistemology of causality

There are two epistemic approaches to causal theory. Under a hypothetico-deductive account, we hypothesize causal relationships and deduce predictions from them; we then test these hypotheses and predictions by comparing them against empirical phenomena and other knowledge of what actually happens. Alternatively, we may take an inductive approach, in which we make a large number of appropriate, justified observations (such as a set of data) from which we induce causal relationships directly.

Hypothetico-Deductive discovery

The testing phase of this account of discovery and causality uses views on the nature of causality to determine whether we support or refute the hypothesis. We search for physical processes underlying the hypothesized causal relationships. We can use statistics and probability to determine which consequences of hypotheses are verified, for example by comparing our data to a distribution such as a Gaussian or a Dirichlet. We can further probe these consequences at the probabilistic level and show that changing hypothesized causes predicts, determines, or guarantees effects.
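
As a small illustration of this testing phase, the sketch below simulates data under the hypothesis that a cause shifts the mean of a Gaussian effect variable and checks the deduced consequence with a two-sample t-test. The data and effect size are invented for the example; the numpy and scipy calls are standard.

    # A minimal sketch of testing a deduced consequence of a causal
    # hypothesis: "the treatment raises the mean of the effect."
    # Data are simulated so the example is self-contained.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    control = rng.normal(loc=0.0, scale=1.0, size=200)  # cause absent
    treated = rng.normal(loc=0.5, scale=1.0, size=200)  # cause present

    # If the hypothesis holds, the samples should differ in mean; the
    # t-test probes this deduced consequence against the data.
    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A small p-value supports the consequence; a large one counts
    # against the hypothesis, in the spirit of attempted refutation.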

Philosopher Karl Popper advocated this approach for causal explanations of events in terms of natural laws, which are universal statements about the world. He designated initial conditions, single-case statements from which we may deduce outcomes and form predictions of various events. These initial conditions call for effects we can determine, such as whether a physical system will approach thermodynamic equilibrium or how a population might evolve under the influence of predators or external forces. Popper described the method of hypothesizing laws, deducing their consequences, and rejecting unsupported laws as a cyclical process. This is the covering-law account of causal explanation.

Inductive learning

Philosopher Francis Bacon promoted the inductive account of scientific learning and reasoning. From a very high number of observations of some phenomenon or event, with experimental, empirical evidence where appropriate, we can compile a table of positive instances (in which the phenomenon occurs), negative instances (in which it doesn’t occur), and partial instances (in which it occurs to a certain degree). This gives a multidimensionality to phenomena that characterizes causal relationships from both a priori and a posteriori perspectives.
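
A crude way to picture Bacon’s tables is to classify each observation by the degree to which the phenomenon occurs. In this minimal sketch, the observations and the thresholds separating positive, negative, and partial instances are arbitrary placeholders.

    # Bacon-style tables of instances: each observation records the
    # degree (0 to 1) to which the phenomenon occurred; the cutoffs
    # below are arbitrary illustrative choices.
    observations = {"obs1": 1.0, "obs2": 0.0, "obs3": 0.4, "obs4": 0.9}

    tables = {"positive": [], "negative": [], "partial": []}
    for name, degree in observations.items():
        if degree >= 0.8:
            tables["positive"].append(name)   # phenomenon occurs
        elif degree <= 0.1:
            tables["negative"].append(name)   # phenomenon absent
        else:
            tables["partial"].append(name)    # occurs to a degree

    print(tables)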

Inductivist artificial intelligence (AI) approaches share the assumption that causal relationships can be determined from statistical relationships. We assume the Causal Markov Condition holds of physical causality and physical probability; a causal net must take the Causal Markov Condition as an assumption or premise. For structural equation models (SEMs), the Causal Markov Condition results from representing each variable as a function of its direct causes and an associated error variable, where we assume the error variables are probabilistically independent of one another. We then find the class of causal models, or a single best causal model, whose probabilistic independences are justified by the Causal Markov Condition. These should be consistent with the independences we can infer from the data, and we might also make further assumptions: minimality (no submodel of the causal model also satisfies the Causal Markov Condition), faithfulness (all independences in the data are implied via the Causal Markov Condition), and linearity (all variables are linear functions of their direct causes and uncorrelated error variables).

We may also define causal sufficiency (whether all common causes of measured variables are measured) and context generality (whether every individual or node in the model has the causal relations of the population). These features let us describe models and methods of scientific reasoning as causal in nature and, from there, apply appropriate causal models using Bayesian, frequentist, or similar methods of prediction. We may even illustrate a causal diagram or model elements under various conditions, such as those given by independences or constraints on variables.
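
The sketch below illustrates the Causal Markov Condition in the simplest structural equation model, a linear chain X → Y → Z with independent errors, where conditioning on Y should screen X off from Z. The coefficients are arbitrary, and the partial-correlation check is a crude stand-in for a proper conditional-independence test.

    # A minimal linear SEM for the chain X -> Y -> Z with independent
    # error variables. The Causal Markov Condition implies X and Z are
    # independent given Y; a partial correlation near zero reflects it.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    x = rng.normal(size=n)             # exogenous variable
    y = 0.8 * x + rng.normal(size=n)   # Y = function of X + error
    z = 0.8 * y + rng.normal(size=n)   # Z = function of Y + error

    def partial_corr(a, b, given):
        """Correlate a and b after regressing `given` out of each."""
        ra = a - np.polyval(np.polyfit(given, a, 1), given)
        rb = b - np.polyval(np.polyfit(given, b, 1), given)
        return np.corrcoef(ra, rb)[0, 1]

    print(np.corrcoef(x, z)[0, 1])  # marginally dependent (~0.45)
    print(partial_corr(x, z, y))    # conditionally independent (~0.0)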

Given the intercorrelatedness of the graph or model, we can’t change the value of a variable without affecting the way it relates to other variables, though there may be conditions under which we can construct models with autonomous nodes or variables. How these features and claims of inductivist AI interact with one another is subject to debate over the underlying assumptions, justifications, and methods of reasoning behind these models.

Metaphysics of causality

We can pose questions about the mathematization of causality even alongside the research and methods that have dominated work on probability and its consequences. We can ask what causality is and survey opinions on its nature as they relate to the axioms and definitions that have remained stable in the theories of probability and statistics.

We can distinguish three broad approaches to causality. The first holds that causality is only a heuristic with no role in scientific reasoning and discourse, as philosopher Bertrand Russell argued: science depends upon functional relationships, not causal laws. The second holds that causality is a fundamental feature of the world, a universal principle, and that we should therefore treat it as a scientific primitive. This position evolved out of conflict with purported philosophical analyses that appealed to the asymmetry of time (that it moves in one direction) to explain the asymmetry of causation (that causes run in one direction only), which raises the question of how to interpret time in terms of causality. The third holds that we can reduce causal relations to other concepts that don’t involve causal notions. Many philosophers support this third position, and there are four divisions within it.

The first division takes causality to be a relation between variables that are single-case or repeatable, according to the interpretation of causality in question. We interpret causality as mental in nature if it’s a feature of an agent’s epistemic state, and as physical if it’s a feature of the external world. We interpret it as subjective if two agents with the same relevant knowledge can disagree on a conclusion about the relationships with both positions correct, as though it were a matter of arbitrary choice; otherwise we interpret it as objective. The subjective-objective schism raises the issue of how different positions could each be regarded as correct, and of what determines the subjective element or the role subjectivity plays in these two positions.

The second division is the mechanistic account of causality: physical processes link cause and effect, and we interpret causal statements as giving information about these processes. Philosophers Wesley Salmon and Phil Dowe advocate this position, arguing that causal processes transmit, or possess, a conserved physical quantity. We may describe the relation between energy and mass (E = mc²) as a causal relation running from a start (cause) to a finish (effect). One may object that such relations in science have no specific direction, being symmetrical, and so aren’t subject to causality. The account does, however, relate single cases linked by physical processes, even if we can induce causal regularities or laws from these connections in an objective manner: if two people disagree on the causal connections, one or both are wrong.

This approach is difficult to apply. The physics of these quantities isn’t determined by the causal relations themselves, and while the conservation of these physical quantities may suggest causal links to physicists, the quantities aren’t relevant in the fields that emerge from physics, such as chemistry or engineering. This would lead one to believe the epistemology of causal concepts is irrelevant to their metaphysics; if that were the case, knowledge of a causal relationship would have little to do with the causal connection itself.

The third division is probabilistic causality, in which we treat causal connections in terms of probabilistic relationships among variables. We can debate which probabilistic relationships among variables determine or create causal relationships. One candidate is the Principle of the Common Cause: if two variables are probabilistically dependent, then either one causes the other or they’re effects of common causes that render them conditionally independent of one another. Philosopher Hans Reichenbach applied this principle to causality to provide a probabilistic analysis of time’s single direction. More recent philosophers use the Causal Markov Condition as a necessary condition for causality, together with other, less central conditions. We normally apply probabilistic causality to repeatable variables that probability theory can handle, but critics argue the Principle of the Common Cause and the Causal Markov Condition face counterexamples showing they don’t hold under all conditions.
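
The Principle of the Common Cause is easy to see in simulation: two effects of a shared cause are correlated, and conditioning on the cause screens the correlation off. The linear model and coefficients below are invented for the example.

    # Two effects A and B of a common cause C are dependent, but nearly
    # independent once C is held (approximately) fixed.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    c = rng.normal(size=n)      # common cause
    a = c + rng.normal(size=n)  # effect A of C
    b = c + rng.normal(size=n)  # effect B of C

    print(np.corrcoef(a, b)[0, 1])  # dependent (~0.5)

    near_zero = np.abs(c) < 0.05    # slice where C is roughly constant
    print(np.corrcoef(a[near_zero], b[near_zero])[0, 1])  # ~0.0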

Finally, the fourth division is the counterfactual account, advocated by philosopher David Lewis. Here we reduce causal relations to subjunctive conditionals: an effect depends causally on a cause if and only if (1) were the cause to occur, the effect would occur (or its chance of occurring would rise significantly), and (2) were the cause not to occur, the effect wouldn’t occur. Causation is then the transitive closure of causal dependence (a cause either raises the probability of a direct effect or, if it’s a preventative, makes the effect less likely, so long as the effect’s other direct causes are held fixed). These causal relationships are a matter of what goes on in possible worlds similar to our own; Lewis introduced the counterfactual theory to account for causal relationships between single-case events that are mind-independent and objective. We may still press this account by arguing that we have no physical contact with these possible worlds, or that there is no objective way to determine which worlds are closest to our own or which worlds we should analyze in determining causality. The counterfactualist may respond that the worlds in which the cause-and-effect relationship holds are closer to our own and, from there, determine which world is appropriately closest.
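
To make conditions (1) and (2) concrete, here is a toy deterministic check of counterfactual dependence. A caveat: this sketch follows the structural-equations (“surgery”) reading of counterfactuals rather than Lewis’s own similarity ordering over possible worlds, and the three-event chain is invented for illustration.

    # A toy, deterministic check of counterfactual dependence in the
    # structural-equations style (an approximation, not Lewis's
    # possible-worlds semantics). Events form a chain C -> D -> E.
    def effect_occurs(c_occurs):
        d_occurs = c_occurs   # D occurs iff C does
        e_occurs = d_occurs   # E occurs iff D does
        return e_occurs

    # Lewis-style causal dependence of E on C:
    # (1) were C to occur, E would occur;
    # (2) were C not to occur, E would not occur.
    condition_1 = effect_occurs(True) is True
    condition_2 = effect_occurs(False) is False

    print("E causally depends on C:", condition_1 and condition_2)  # True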