Processing RDF nquads with grep


June 27, 2020

I am trying to extract Australian Job Postings from Web Data Commons, which extracts structured data from Common Crawl. I previously came up with a SPARQL query to extract the Australian jobs based on the domain, country and currency. Unfortunately it’s quite slow, but we can speed it up dramatically by replacing it with a similar script in grep.

With a short grep script we can get twenty thousand Australian Job Postings with metadata from 16 million lines of compressed nquads in 30 seconds on my laptop. This can be run against any Web Data Commons extract of Job Postings.

zgrep -aE \
 '((<https?://[^ >]+/)?(addressCountry|name|salaryCurrency|currency)> "(Australia|AU|AUS|AUD)")|( <https?://[^ >/]+\.au/[^ >]*> \.$)' \
 *.gz | \
 grep -Eo '<https?://[^ >]+> .$' | \
 uniq | \
 sed -E 's/<([^ >]+)> .$/\1/' | \
 sort -u > au_urls.txt
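To sanity-check the combined pattern we can run it over a few hand-written quads (the data below is invented for illustration; the real files are much larger):

```shell
# Three fake nquads: the first should match on country, the second on a
# .au graph URL, and the third should not match at all.
cat <<'EOF' > sample.nq
_:b0 <http://schema.org/JobPosting/addressCountry> "AU" <https://jobs.example.com/1> .
_:b1 <http://schema.org/name> "Plumber" <https://jobs.example.com.au/2> .
_:b2 <http://schema.org/addressCountry> "NZ" <https://jobs.example.co.nz/3> .
EOF

# Only the first two lines should be printed.
grep -aE '((<https?://[^ >]+/)?(addressCountry|name|salaryCurrency|currency)> "(Australia|AU|AUS|AUD)")|( <https?://[^ >/]+\.au/[^ >]*> \.$)' sample.nq
```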

The top 10 domains (of 450) from this extract look reasonable: recruitment agencies operating in Australia.

[Table: # URLs by Domain, for the top 10 domains]

Using SPARQL in rdflib

To stream the nquads into rdflib one graph at a time I needed to use a regular expression to get the URL. This is safe to do because in Web Data Commons the last element of each quad is always the URI of the page the data was obtained from, and URIs cannot contain spaces or tabs.

import re
RDF_QUAD_LABEL_RE = re.compile("[ \t]+<([^ \t]*)>[ \t]+.\n$")
def get_quad_label(s):
    return RDF_QUAD_LABEL_RE.search(s).group(1)
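As a quick check, the pattern pulls the graph URL out of a hand-written quad (the data here is made up for illustration):

```python
import re

# Same pattern as above: the graph label is the final <...> term of the quad.
RDF_QUAD_LABEL_RE = re.compile("[ \t]+<([^ \t]*)>[ \t]+.\n$")

quad = ('_:b0 <http://schema.org/title> "Plumber" '
        '<https://jobs.example.com.au/ad/1> .\n')
match = RDF_QUAD_LABEL_RE.search(quad)
print(match.group(1))  # https://jobs.example.com.au/ad/1
```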

If we want to check that the domain is .au then we can parse it with urllib.

import urllib
def get_domain(url):
    return urllib.parse.urlparse(url).netloc
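For example, on a made-up URL the netloc is the host part, which is what we compare against `.au`:

```python
from urllib.parse import urlparse

def get_domain(url):
    # netloc is the host part of the URL, e.g. "jobs.example.com.au"
    return urlparse(url).netloc

print(get_domain('https://jobs.example.com.au/ad/1'))  # jobs.example.com.au
```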

We can then filter out the Australian domains in about 4 minutes, giving 17,000 distinct URLs. Because this extract only contains URLs with a JobPosting, these will all have structured job ads.

from itertools import groupby
import gzip
from tqdm.notebook import tqdm

f = tqdm(gzip.open('2019-12_json_JobPosting.gz', 'rt'), total=16_925_915)
au_urls = []
for url, _ in groupby(f, get_quad_label):
    if get_domain(url).endswith('.au'):
        au_urls.append(url)
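Because quads from the same page are adjacent in the Web Data Commons files, `groupby` on the graph label yields each page URL once per block of quads. A small sketch with invented lines:

```python
from itertools import groupby
import re

RDF_QUAD_LABEL_RE = re.compile("[ \t]+<([^ \t]*)>[ \t]+.\n$")

def get_quad_label(s):
    # The graph label is the final <...> term of each nquad line
    return RDF_QUAD_LABEL_RE.search(s).group(1)

lines = [
    '_:b0 <http://schema.org/title> "Plumber" <https://a.example.com.au/1> .\n',
    '_:b0 <http://schema.org/name> "ACME" <https://a.example.com.au/1> .\n',
    '_:b1 <http://schema.org/title> "Baker" <https://b.example.com/2> .\n',
]
# Consecutive quads with the same graph label collapse into one group
urls = [url for url, _ in groupby(lines, get_quad_label)]
print(urls)  # ['https://a.example.com.au/1', 'https://b.example.com/2']
```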

We can extend this with a SPARQL query to search for Australian country or currency.

PREFIX sdo: <http://schema.org/>
PREFIX sdo_jp: <http://schema.org/JobPosting/>
PREFIX sdo_pl: <http://schema.org/Place/>
PREFIX sdo_pa: <http://schema.org/PostalAddress/>
PREFIX sdo_co: <http://schema.org/Country/>
PREFIX sdo_mv: <http://schema.org/MonetaryValue/>
PREFIX sdos_mv: <https://schema.org/MonetaryValue/>

# The country may be a literal or a Country node with a name;
# whitespace is trimmed before comparing values.
ASK {
    {[] a sdo:JobPosting ;
         (sdo:jobLocation|sdo_jp:jobLocation)?/(sdo:address|sdo_pl:address)?/
         (sdo:addressCountry|sdo_pa:addressCountry)/((sdo:name|sdo_co:name)?) ?country .
     FILTER (isliteral(?country) &&
             lcase(replace(str(?country),
                           '[ \n\t]*(.*)[ \n\t]*',
                           '\\1')) in ('au', 'australia'))
    } UNION
    {[] a sdo:JobPosting ;
         (sdo:baseSalary|sdo_jp:baseSalary)?/
         (sdo:currency|sdo_mv:currency|sdos_mv:currency) ?currency .
     BIND (replace(str(?currency), '[ \n\t]+', '') as ?curr)
     FILTER (lcase(?curr) = 'aud')}
}

We can then wrap this query in a filter function on the graph, and add a function to get the domain of the graph so we can also search for .au domains.

def query_au(g):
    # An ASK query yields a single boolean result
    result = list(g.query(aus_sparql))[0]
    return result

def graph_domain(g):
    url = g.identifier.toPython()
    return get_domain(url)

We can then apply this filter, but it takes 40 minutes; most of this time is spent parsing the graphs with rdflib.

f = tqdm(gzip.open('2019-12_json_JobPosting.gz', 'rt'), total=16_925_915)
au_urls = []
for graph in parse_nquads(f):
    if graph_domain(graph).endswith('.au') or query_au(graph):
        au_urls.append(graph.identifier.toPython())

Translating to shell

The first script, searching for .au domains, is essentially a regular expression and can be translated directly into grep and sed. This takes 30s, so it is about 8 times faster than Python.

zgrep -aEo ' <https?://[^/ >]+\.au/[^ >]*> .$' \
      2019-12_json_JobPosting.gz | \
  uniq | \
  sed -E 's/^ *<([^ >]+)> \.$/\1/' | \
  sort -u > au_urls.txt

For finding the country or currency we make a simplifying assumption: if a name/country/currency value is “Australia”, “AU”, “AUS” or “AUD” then it is a job ad located in Australia. This is a pretty safe assumption; it’s unlikely that e.g. a company name would be “Australia” or “AUS”. At worst we can filter out false positives later with more processing. We can then extract the URL with sed as before, and it still takes 30s (rather than 40 minutes using rdflib).

zgrep -E '(<https?://[^ >]+/)?(addressCountry|name|salaryCurrency|currency)> "(Australia|AU|AUS|AUD)"' \
  2019-12_json_JobPosting.gz | \
  grep -Eo '<https?://[^ >]+> .$' | \
  uniq | \
  sed -E 's/<([^ >]+)> .$/\1/' | \
  sort -u > au_urls.txt
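The quotes in the pattern anchor the whole literal, so near-misses don’t slip through. A quick check of that assumption in Python (the example strings are invented):

```python
import re

# The quoted alternation only matches when the entire literal is one of
# the target values, since the closing quote must follow immediately.
VALUE_RE = re.compile(r'"(Australia|AU|AUS|AUD)"')

print(bool(VALUE_RE.search('<http://schema.org/addressCountry> "AU" .')))  # True
print(bool(VALUE_RE.search('<http://schema.org/name> "AUSTRIA" .')))       # False
print(bool(VALUE_RE.search('<http://schema.org/name> "Aus Co" .')))        # False
```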

Finally we can combine the two expressions into the script at the start of the article. We can then analyse the output with some shell processing to get the table of top domains shown above.

sed -E 's|https?://([^ /]+)/.*|\1|' \
    au_urls.txt | \
  sort | \
  uniq -c | \
  sort -nr | \
  head -n 10
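On a toy URL list (the domains below are invented) the same pipeline produces the count-per-domain table:

```shell
# Three URLs over two domains
printf '%s\n' \
  'https://jobs.example.com.au/ad/1' \
  'https://jobs.example.com.au/ad/2' \
  'https://careers.other.net.au/3' > toy_urls.txt

# Strip each URL down to its domain, then count occurrences per domain
sed -E 's|https?://([^ /]+)/.*|\1|' toy_urls.txt |
  sort | uniq -c | sort -nr | head -n 10
# jobs.example.com.au (count 2) comes out on top
```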