The Emergent Beauty of Mathematics

The first 30 seconds of a Brownian tree (code can be found here).

Like a flower in full bloom, nature reveals patterns shaped by mathematics. When particles collide with one another, they create snowflake-like fractals through emergence. How do fish swarm in schools, or how does consciousness arise from the brain? Simulations can provide answers.

Through code, you can simulate how living cells or physical particles interact with one another, using equations that govern how cells behave when they meet and how they grow and evolve into larger structures. The gif above uses diffusion-limited aggregation to create Brownian trees: the structures that emerge when randomly moving particles stick to one another. Particles suspended in a fluid (like dye dropped into water) trace out these patterns when you look at them under a microscope. As the particles collide and aggregate into trees, they create shapes and patterns like ice crystals on glass. These visuals offer a way of appreciating how beautiful mathematics is. The way mathematical theory borrows from nature, and the way biological communities of living organisms themselves depend on physical laws, shows how such an interdisciplinary approach can bridge different disciplines.

After about 20 minutes, the branches of the Brownian tree take form.

In the code, the particles move with random velocities in two dimensions and, if they collide with the tree (a single seed particle at the beginning), they stick and become part of it. As the tree grows over time, it takes the shape of branches, much the way neurons in the brain form tree-like arbors that send signals between one another. The uniqueness of these fractals gives them a kind of mathematical beauty.
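The linked code is the authoritative version; as a rough sketch of the idea, here is a minimal diffusion-limited aggregation simulation on a grid (not the author's script; the grid size, walker count and sticking rule are simplifying assumptions):

import matplotlib.pyplot as plt
import numpy as np

# Minimal diffusion-limited aggregation on a square grid (illustrative sketch).
N = 51                        # Grid size (an assumption for this sketch).
grid = np.zeros((N, N), dtype=bool)
grid[N // 2, N // 2] = True   # Seed particle at the center of the grid.

rng = np.random.default_rng(0)

def has_stuck_neighbor(x, y):
    """True if any of the four neighboring cells already belongs to the tree."""
    return (grid[x - 1, y] or grid[x + 1, y] or
            grid[x, y - 1] or grid[x, y + 1])

for _ in range(500):                        # Number of random walkers (an assumption).
    x, y = rng.integers(1, N - 1, size=2)   # Release a walker at a random spot.
    for _step in range(10000):              # Cap the walk so the sketch finishes quickly.
        if has_stuck_neighbor(x, y):
            grid[x, y] = True               # The walker touches the tree and sticks.
            break
        # Take one random step in one of the four directions, staying off the border.
        dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
        x = min(max(x + dx, 1), N - 2)
        y = min(max(y + dy, 1), N - 2)

# Visualize the aggregate.
plt.imshow(grid, cmap="Greys")
plt.show()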

Conway’s game of life represents another way something emerges from randomness.

Flashing lights appearing and fading like stars in the sky are more than just randomness. These models of cellular interactions are known as cellular automata. The gif above shows an example of Conway’s Game of Life, a simulation of how living cells interact with one another.

These cells “live” and “die” according to four simple rules: (1) live cells with fewer than two live neighbors die, as if by underpopulation, (2) live cells with two or three live neighbors live on to the next generation, (3) live cells with more than three live neighbors die, as if by overpopulation and (4) dead cells with exactly three live neighbors become live cells, as if by reproduction.
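These four rules translate almost directly into code. Here is a minimal sketch of one generation of the Game of Life on a NumPy grid (not the code behind the gif above; the wrap-around boundary and grid size are assumptions of this sketch):

import numpy as np

def life_step(grid):
    """Apply Conway's four rules to a boolean grid for one generation."""
    # Count each cell's eight live neighbors, wrapping around at the edges.
    neighbors = sum(np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    # Rules 1-3: a live cell survives only with two or three live neighbors.
    survive = grid & ((neighbors == 2) | (neighbors == 3))
    # Rule 4: a dead cell with exactly three live neighbors becomes a live cell.
    born = ~grid & (neighbors == 3)
    return survive | born

# A glider, one of the shapes mentioned below, traveling across a small grid.
grid = np.zeros((20, 20), dtype=bool)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = True
for _ in range(8):
    grid = life_step(grid)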

Conus textile shows a similar cellular automaton pattern on its shell.

Through these rules, specific shapes emerge, such as “gliders” or “knightships,” that you can further describe with rules and equations of their own. You’ll also find natural versions of cells obeying simple rules, like the colorful patterns on a seashell. What unites these observations is complex structure emerging from more basic, fundamental sets of rules. While the beauty of these structures becomes more apparent as the same patterns recur across disciplines, searching for such patterns in other contexts, such as human behavior, can be far more difficult.

Recent writing like Genesis: The Deep Origin of Societies by biologist E.O. Wilson takes on the debate over how altruism in humans evolved. While the shape of a snowflake can emerge from interactions between water molecules, humans getting along with one another seems far more complicated and higher-level. You can find similar cooperation in ants and termites building societies, but how did such behavior evolve?

For decades, biologists have answered that organisms choose mates in ways that increase the survival chances of themselves and their offspring while passing on their genes. Wilson offers a contrary point of view: within groups, selfish organisms defeat altruistic ones, but altruistic groups beat selfish groups overall. This group selection, he argues, drives the emergence of altruism. In making these arguments, both sides have appealed to the mathematics of nature, showing its growing importance in recognizing the patterns of life.

Wilson clarifies that data analysis and mathematical modeling should come second to the biology itself. Becoming experts on organisms themselves should be a priority. Regardless of what form it takes, the beauty is still there, even if it’s below the surface.

How to Create Interactive Network Graphs (from Twitter or elsewhere) with Python

This post is a gentle introduction to the Python packages that let you create network graphs users can interact with. With a few steps into graph theory, you can apply these methods to anything from links between the severity of terrorist attacks to the prices of taxi cabs. In this tutorial, you’ll use information from Twitter to make graphs anyone can appreciate.

The code for steps 1 and 2 can be found on GitHub here, and the code for the rest, here.

Table of contents:

  1. Get Started
  2. Extract Tweets and Followers
  3. Process the Data
  4. Create the Graph
  5. Evaluate the Graph
  6. Plot the Map

1. Get Started

Make sure you’re familiar with using a command line interface such as Terminal and that you can install the necessary Python packages (chart-studio, matplotlib, networkx, pandas, plotly and python-twitter). You can use Anaconda to install them. This tutorial will introduce the parts of the script you can run from the command line to extract tweets and visualize them.

If you don’t have a Twitter developer account, you’ll need to log in here and get one. Then create an app and find your keys and secret codes for the consumer and access tokens. These let you extract information from Twitter.

2. Extract Tweets and Followers

To extract tweets, run the script below. In this example, the tweets of the UCSC Science Communication class of 2020 are analyzed, so their Twitter handles are listed in screennames. Replace the variables currently defined as None below with your own keys and secret codes. Keep these keys and codes safe and don’t share them with others. Set datadir to the output directory where the data will be stored.

The code begins with import statements for the required packages, including json, os and pickle, which come installed with Python.

import json
import os
import pickle
import twitter 

screennames = ["science_ari", "shussainather", "laragstreiff",
               "scatter_cushion", "jessekathan", "jackjlee",
               "erinmalsbury", "joetting13", "jonathanwosen",
               "heysmartash"]

CONSUMER_KEY = None
CONSUMER_SECRET = None
ACCESS_TOKEN_KEY = None
ACCESS_TOKEN_SECRET = None

datadir = "data/twitter"

Next, extract the information we need. This code goes through each screen name and accesses its tweet and follower information, then saves both to output JSON and pickle files.

t = twitter.Api(consumer_key=CONSUMER_KEY,
                consumer_secret=CONSUMER_SECRET,
                access_token_key=ACCESS_TOKEN_KEY,
                access_token_secret=ACCESS_TOKEN_SECRET)

for sn in screennames:
    # For each user, get the followers and tweets and save them
    # to output pickle and JSON files.
    fo = datadir + "/" + sn + ".followers.pickle"
    # Get the follower information.
    fof = t.GetFollowers(screen_name=sn)
    with open(fo, "wb") as fofpickle:  # Pickle files must be opened in binary mode.
        pickle.dump(fof, fofpickle, protocol=2)
    with open(fo, "rb") as fofpickle:
        with open(fo.replace(".pickle", ".json"), "w") as fofjson:
            fofdata = pickle.load(fofpickle)
            # Convert the User objects to dictionaries so they can be serialized.
            json.dump([u.AsDict() for u in fofdata], fofjson)
    # Get the user's timeline with the 500 most recent tweets.
    timeline = t.GetUserTimeline(screen_name=sn, count=500)
    tweets = [i.AsDict() for i in timeline]
    with open(datadir + "/" + sn + ".tweets.json", "w") as tweetsjson:
        json.dump(tweets, tweetsjson)  # Store the information in a JSON file.

This should extract the followers and tweets and save them to pickle and JSON files in the datadir.

3. Process the Data

Now that you have an input JSON file of tweets, set its path to the tweetsjson variable in the code below so you can read it as a pandas DataFrame.

For the rest of the tutorial, start a new script for convenience.

import json
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd
import re

from plotly.offline import iplot, plot
from operator import itemgetter

Use pandas to import the JSON file as a pandas DataFrame.

# Set tweetsjson to the path of one of the tweet JSON files saved in step 2.
df = pd.read_json(tweetsjson)

Set tfinal as the final DataFrame to make.

tfinal = pd.DataFrame(columns=["created_at", "id", "in_reply_to_screen_name",
                               "in_reply_to_status_id", "in_reply_to_user_id",
                               "retweeted_id", "retweeted_screen_name",
                               "user_mentions_screen_name", "user_mentions_id",
                               "text", "user_id", "screen_name", "followers_count"])

Then, extract the columns you’re interested in and add them to tfinal. (The filldf() helper called here is defined further below; in your script, place those function definitions above this call.)

eqcol = ["created_at", "id", "text"]
tfinal[eqcol] = df[eqcol]
tfinal = filldf(tfinal)
tfinal = tfinal.where((pd.notnull(tfinal)), None)

Use the following functions to extract this information. Each function extracts information from the input df DataFrame and adds it to tfinal.

First, get the basic information: screen name, user ID and how many followers.

def getbasics(tfinal):
    """
    Get the basic information about the user.
    """
    tfinal["screen_name"] = df["user"].apply(lambda x: x["screen_name"])
    tfinal["user_id"] = df["user"].apply(lambda x: x["id"])
    tfinal["followers_count"] = df["user"].apply(lambda x: x["followers_count"])
    return tfinal

Then, get information on which tweets have been retweeted.

def getretweets(tfinal):
    """
    Get retweets.
    """
    # Inside the "retweeted_status" tag, find "user" and get its "screen_name" and "id".
    tfinal["retweeted_screen_name"] = df["retweeted_status"].apply(lambda x: x["user"]["screen_name"] if x is not np.nan else np.nan)
    tfinal["retweeted_id"] = df["retweeted_status"].apply(lambda x: x["user"]["id_str"] if x is not np.nan else np.nan)
    return tfinal

Figure out which tweets are replies and to whom they are replying.

def getinreply(tfinal):
    """
    Get reply info.
    """
    # Just copy the "in_reply" columns to the new DataFrame.
    tfinal["in_reply_to_screen_name"] = df["in_reply_to_screen_name"]
    tfinal["in_reply_to_status_id"] = df["in_reply_to_status_id"]
    tfinal["in_reply_to_user_id"] = df["in_reply_to_user_id"]
    return tfinal

The following function runs each of these functions to get the information into tfinal.

def filldf(tfinal):
    """
    Put it all together.
    """
    getbasics(tfinal)
    getretweets(tfinal)
    getinreply(tfinal)
    return tfinal

You’ll use this getinteractions() function in the next step when creating the graph. This takes the actual information from the tfinal DataFrame and puts it into the format that a graph can use.

def getinteractions(row):
    """
    Get the interactions between different users.
    """
    # From every row of the original DataFrame,
    # first obtain the "user_id" and "screen_name".
    user = row["user_id"], row["screen_name"]
    # Be careful if there is no user id.
    if user[0] is None:
        return (None, None), []

For the remainder of the function, collect the interactions if they’re there.

    # The interactions are going to be a set of tuples.
    interactions = set()

    # Add all interactions. 
    # First, we add the interactions corresponding to replies adding 
    # the id and screen_name.
    interactions.add((row["in_reply_to_user_id"], 
    row["in_reply_to_screen_name"]))
    # After that, we add the interactions with retweets.
    interactions.add((row["retweeted_id"], 
    row["retweeted_screen_name"]))
    # And later, the interactions with user mentions.
    interactions.add((row["user_mentions_id"], 
    row["user_mentions_screen_name"]))

    # Discard the user's own id from the interactions.
    interactions.discard((row["user_id"], row["screen_name"]))
    # Discard non-existing (None) values.
    interactions.discard((None, None))
    # Return user and interactions.
    return user, interactions

4. Create the Graph

Initialize the graph with networkx.

graph = nx.Graph()

Loop through the tfinal DataFrame and get the interaction information. Use the getinteractions function to get each user and interaction involved with each tweet.

for index, tweet in tfinal.iterrows():
    user, interactions = getinteractions(tweet)
    user_id, user_name = user
    tweet_id = tweet["id"]
    for interaction in interactions:
        int_id, int_name = interaction
        graph.add_edge(user_id, int_id, tweet_id=tweet_id)
        # In newer networkx releases, node attributes are set through graph.nodes.
        graph.nodes[user_id]["name"] = user_name
        graph.nodes[int_id]["name"] = int_name

5. Evaluate the Graph

In the field of social network analysis (SNA), researchers use measurements of nodes and edges to tell what graphs are like. This lets you separate the signal from the noise when looking at network graphs.

First, look at the degrees and edges of the graph. The print statements should print out the information about these measurements.

degrees = [val for (node, val) in graph.degree()]
print("The maximum degree of the graph is " + str(np.max(degrees)))
print("The minimum degree of the graph is " + str(np.min(degrees)))
print("There are " + str(graph.number_of_nodes()) + " nodes and " + str(graph.number_of_edges()) + " edges present in the graph")
print("The average degree of the nodes in the graph is " + str(np.mean(degrees)))

Are all the nodes connected?

if nx.is_connected(graph):
    print("The graph is connected")
else:
    print("The graph is not connected")
    print("There are " + str(nx.number_connected_components(graph)) + " connected components in the graph.")

Information about the largest subgraph can tell you what sort of tweets represent the majority.

# nx.connected_component_subgraphs was removed in newer networkx releases,
# so build the subgraphs from the connected components directly.
largestsubgraph = max((graph.subgraph(c).copy() for c in nx.connected_components(graph)), key=len)
print("There are " + str(largestsubgraph.number_of_nodes()) + " nodes and " + str(largestsubgraph.number_of_edges()) + " edges present in the largest component of the graph.")

The clustering coefficient tells you how closely the nodes congregate, based on the density of the connections surrounding each node. If many nodes are connected in a small area, the clustering coefficient will be high.

print("The average clustering coefficient is " + str(nx.average_clustering(largestsubgraph)) + " in the largest subgraph")
print("The transitivity of the largest subgraph is " + str(nx.transitivity(largestsubgraph)))
print("The diameter of our graph is " + str(nx.diameter(largestsubgraph)))
print("The average distance between any two nodes is " + str(nx.average_shortest_path_length(largestsubgraph)))

Centrality measures how important a node is within the network, and there are several ways to compute it. Degree centrality counts how many direct, “one step” connections each node has to other nodes. “Betweenness centrality” identifies which nodes act as “bridges” between other nodes by finding the shortest paths and counting how many times each node falls on one. “Closeness centrality” scores each node by how close it is to every other node, based on the sum of its shortest-path distances to them.

graphcentrality = nx.degree_centrality(largestsubgraph)
maxde = max(graphcentrality.items(), key=itemgetter(1))
graphcloseness = nx.closeness_centrality(largestsubgraph)
graphbetweenness = nx.betweenness_centrality(largestsubgraph, normalized=True, endpoints=False)
maxclo = max(graphcloseness.items(), key=itemgetter(1))
maxbet = max(graphbetweenness.items(), key=itemgetter(1))

print("The node with ID " + str(maxde[0]) + " has a degree centrality of " + str(maxde[1]) + " which is the max of the graph.")
print("The node with ID " + str(maxclo[0]) + " has a closeness centrality of " + str(maxclo[1]) + " which is the max of the graph.")
print("The node with ID " + str(maxbet[0]) + " has a betweenness centrality of " + str(maxbet[1]) + " which is the max of the graph.")

6. Plot the Map

Get the edges and store their endpoint coordinates in the lists Xe and Ye for the x- and y-directions. These coordinates come from the node layout pos, computed with the layout algorithm shown below, so run that line first.

Xe = []
Ye = []
# Build the line coordinates for each edge (None separates one edge from the next).
for e in graph.edges():
    Xe.extend([pos[e[0]][0], pos[e[1]][0], None])
    Ye.extend([pos[e[0]][1], pos[e[1]][1], None])
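The node trace in the next step also needs the node coordinates Xn and Yn and the hover text labels, which aren’t defined elsewhere in this post. A minimal sketch, assuming the same graph and pos used for the edges:

# Node coordinates and hover labels, taken from the same layout dictionary pos.
nodes = list(graph.nodes())
Xn = [pos[n][0] for n in nodes]
Yn = [pos[n][1] for n in nodes]
# Use the "name" attribute set in step 4 when available, otherwise the node id.
labels = [graph.nodes[n].get("name", str(n)) for n in nodes]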

Define the Plotly “trace” for nodes and edges. Plotly uses these traces as a way of storing the graph data right before it’s plotted.

trace_nodes = dict(type="scatter",
                 x=Xn, 
                 y=Yn,
                 mode="markers",
                 marker=dict(size=28, color="rgb(0,240,0)"),
                 text=labels,
                 hoverinfo="text")

trace_edges = dict(type="scatter",
                   mode="lines",
                   x=Xe,
                   y=Ye,
                   line=dict(width=1, color="rgb(25,25,25)"),
                   hoverinfo="none")

Lay out the graph with the Fruchterman-Reingold algorithm. The image below shows an example of a graph plotted with this algorithm, which is designed to make it clear how the nodes are connected.

The force-directed Fruchterman-Reingold algorithm draws nodes in an understandable way.

pos = nx.fruchterman_reingold_layout(graph)

Use the axis and layout variables to customize what appears on the graph. Setting showline=False hides the axis line, and the other options hide the grid, tick labels and axis title. Then the fig variable holds the actual figure.

axis = dict(showline=False,
            zeroline=False,
            showgrid=False,
            showticklabels=False,
            title="")


layout = dict(title="My Graph",
              font=dict(family="Balto"),
              width=600,
              height=600,
              autosize=False,
              showlegend=False,
              xaxis=axis,
              yaxis=axis,
              margin=dict(l=40,
                          r=40,
                          b=85,
                          t=100,
                          pad=0,
                          ),
              hovermode="closest",
              plot_bgcolor="#EFECEA",  # Set the background color.
              )


fig = dict(data=[trace_edges, trace_nodes], layout=layout)

Annotate each node with the information you want others to see. Use the labels variable to list (with the same length as pos) what should appear as an annotation. The node names gathered above already satisfy this; uncomment the line below instead if you’d rather use simple numeric labels.

# labels = list(range(len(pos)))

def make_annotations(pos, anno_text, font_size=14, font_color="rgb(10,10,10)"):
    L = len(pos)
    if len(anno_text) != L:
        raise ValueError("The lists pos and anno_text must have the same length")
    annotations = []
    # pos is the layout dictionary, so iterate over its coordinate values.
    for k, coords in enumerate(pos.values()):
        annotations.append(dict(text=anno_text[k],
                                x=coords[0],
                                # This vertical offset is chosen by trial and error.
                                y=coords[1] + 0.075,
                                xref="x1", yref="y1",
                                font=dict(color=font_color, size=font_size),
                                showarrow=False))
    return annotations

fig["layout"].update(annotations=make_annotations(pos, labels))

Finally, plot.

iplot(fig)

An example graph

Make a word cloud in a single line of Python

Moby-Dick, visualized

This is a concise way to make a word cloud using Python. It can teach you basics of coding while creating a nice graphic.

It’s actually four lines of code, but making the word cloud only takes one line, the final one.

import nltk
from wordcloud import WordCloud
nltk.download("stopwords")
WordCloud(background_color="white", max_words=5000, contour_width=3, contour_color="steelblue").generate_from_text(" ".join([r for r in open("mobydick.txt", "r").read().split() if r not in set(nltk.corpus.stopwords.words("english"))])).to_file("wordcloud.png")

Just tell me what to do now!

The first two lines specify the required packages, which you can download from these links: nltk and wordcloud (you may also try these alternative links: nltk and wordcloud). The third line downloads the stop words (common words like “the”, “a” and “in”) that you don’t want in your word cloud.

The fourth line is complicated. Calling WordCloud(), you can specify the background color, contour color and other options (found here). generate_from_text() takes a string of words to put in the word cloud.

The " ".join() creates this string of words separated by spaces from a list of words. The for loop in the square brackets[] creates this list of each word from the input file (in this case, mobydick.txt) with the r variable letting you use each word one at a time in the list.

The input file is opened with open(), read with read() and split() into its words, keeping only those (using if) that aren’t in nltk.corpus.stopwords.words("english"). Finally, to_file() saves the image as wordcloud.png.
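If the one-liner is hard to follow, here is the same logic unpacked into a few readable steps (an equivalent sketch, using the same file names as above):

import nltk
from wordcloud import WordCloud

nltk.download("stopwords")
stopwords = set(nltk.corpus.stopwords.words("english"))

# Read the text and drop the stop words.
with open("mobydick.txt", "r") as f:
    words = [w for w in f.read().split() if w not in stopwords]

# Build the word cloud from the remaining words and save it as an image.
cloud = WordCloud(background_color="white", max_words=5000,
                  contour_width=3, contour_color="steelblue")
cloud.generate_from_text(" ".join(words))
cloud.to_file("wordcloud.png")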

How to use this code

In the code, change "mobydick.txt" to the name of your text file (keep the quotation marks). Save the code in a file makewordcloud.py in the text file’s directory, and use a command line interface (such as Terminal) to navigate to the directory.

Run your script using python makewordcloud.py, and check out your wordcloud.png!

Global Mapping of Critical Minerals

The periodic table below illustrates the global abundance of critical minerals in the Earth’s crust in parts per million (ppm). Hover over each element to view! Lanthanides and actinides are omitted due to lack of available data.

Data is obtained from the USGS handbook “Critical Mineral Resources of the United States— Economic and Environmental Geology and Prospects for Future Supply.” The code used is found here.

Bokeh Plot
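The linked code is the full version; as a rough illustration of how a hoverable, periodic-table-style plot works in Bokeh, here is a minimal sketch (the element positions and abundance values below are illustrative placeholders, not the USGS data):

import pandas as pd
from bokeh.models import ColumnDataSource, HoverTool
from bokeh.plotting import figure, show

# A few elements placed by group (column) and period (row); values are placeholders.
df = pd.DataFrame({
    "symbol": ["Be", "Ga", "Ge", "In"],
    "group": [2, 13, 14, 13],
    "period": [2, 4, 4, 5],
    "abundance_ppm": [2.0, 19.0, 1.5, 0.25],
})
source = ColumnDataSource(df)

p = figure(title="Crustal abundance of selected critical minerals (ppm)",
           x_range=(0, 19), y_range=(8, 0), tools="")
# Draw one tile per element and label it with its symbol.
p.rect(x="group", y="period", width=0.9, height=0.9, source=source,
       fill_color="lightsteelblue", line_color="grey")
p.text(x="group", y="period", text="symbol", source=source,
       text_align="center", text_baseline="middle")
# Hovering over a tile shows the element and its abundance.
p.add_tools(HoverTool(tooltips=[("Element", "@symbol"),
                                ("Abundance (ppm)", "@abundance_ppm")]))
show(p)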

Because these minerals tend to concentrate in specific countries, like niobium in Brazil or antimony in China, and remain central to many areas of society such as national defense and engineering, governments like that of the US have listed them as “critical.”

The abundance across different continents is shown in the map above.

You can find gallium, the most abundant of the critical minerals, substituting for aluminum and zinc, elements smaller than gallium, in their ores. Processing bauxite ore for aluminum or sphalerite ore for zinc (from sediment-hosted, Mississippi Valley-type and volcanogenic massive sulfide deposits) yields gallium. The US meets its gallium needs through primary, recycled and refined forms of the element.

Germanium and indium have uses in electronics, flat-panel display screens, light-emitting diodes (LEDs) and solar power arrays. China, Belgium, Canada, Japan and South Korea are the main producers of indium, while germanium production can be more complicated. In many cases, countries import primary germanium from others, such as Canada importing from the US or Finland from the Democratic Republic of the Congo, to recover it.

Rechargeable battery cathodes and jet aircraft turbine engines make use of cobalt. While the element is the central atom in vitamin B12, excess exposure can cause lung and heart dysfunction and dermatitis.

As one of only three countries that processes beryllium into products, the US doesn’t put much time or money into exploring for new deposits within its own borders because a single producer dominates the domestic beryllium market. Beryllium finds uses in magnetic resonance imaging (MRI) and medical lasers.