System-Wide Domain Blacklisting

Feb. 5, 2024 [hardening] [privacy-security] [guides] [libre] [technology]

Many of the same block lists used by ad-blocker extensions can also be applied system-wide through your hosts file, redirecting every request for a listed domain to a non-routable address. You may sometimes see this referred to as “blackholing” DNS. If a program initiates a lookup for a domain that appears in the hosts file, the lookup resolves to 0.0.0.0, which fails instantly as unroutable, preventing the program from ever connecting to the actual destination.
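The conventional non-routable target is 0.0.0.0. An excerpt of a blackholing hosts file looks like this (the domains here are invented for illustration):

```
# /etc/hosts excerpt: lookups for these names return the
# non-routable address 0.0.0.0 and fail immediately
0.0.0.0	ads.example.com
0.0.0.0	tracker.example.net
```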

The concept of blackholing has been popularized among newbie privacy communities by the likes of “pi-hole”. But I think that pi-hole misses the mark. First, we don’t want to rely on a system outside of the host: not only does that introduce yet another device which needs to be rigorously secured, but you may also decide to take your computer, especially a laptop, to another network somewhere else. And what of hopping onto a system-wide VPN connection? Additionally, our Tor-wrapped DNS solution detailed in Hardened DNS will evade any such device sitting on the LAN attempting to mediate DNS.

While blackholing can be accomplished simply by manually adding a domain list to hosts, I have substantially adapted a script to automate the process and to allow several different lists to be seamlessly combined. Create a cron or anacron job for the script to run:

vi /etc/cron.daily/hosts-block

And populate it with the following:


#!/bin/bash
#Automated script for maintaining a malware blocking hosts file
#Originally created by user SteveRiley
#Adapted and extended to only accept lists over https, add working directory, automatically apply to hosts, add configurable list categories, generalize beyond just ad blocking, add support for lists already pointing to 0.0.0.0, and prevent overwriting hosts with empty list (such as network issue)

if [ "$(whoami)" != "root" ]; then
    echo "Aborting: Must be run as root or via sudo."
    exit 1
fi
# If this is our first run, save a copy of the system's original hosts file and set to read-only for safety
if [ ! -f /var/local/hosts-blocking/hosts-system ]; then
	echo "Saving copy of system's original hosts file..."
	mkdir /var/local/hosts-blocking
	cp /etc/hosts /var/local/hosts-blocking/hosts-system
	chmod 444 /var/local/hosts-blocking/hosts-system
fi

# Perform work in temporary files
temphosts1=$(mktemp)
temphosts2=$(mktemp)

# Configurable blocklist file sources
block_lists=(
#Block advertisements
"" \
"" \
"" \
"" \

#Block malware
"" \
"" \

#Block crypto miners
"" \
"" \
"" \
"" \

#Block spam
"" \

#Block trackers
"" \
"" \

#Block clickjackers & bad referers
"" \

#Block Facebook
"" \

#Block Google
#"" \
#"" \

#Block Huawei
"" \

#Block NSA known domains
"" \

#Monolithic lists to block spyware, ads, scams, spams, shock sites, popups, trackers, etc.
#"" \
)

# Obtain various hosts files and merge into one
echo "Downloading blocklist files..."
successful_lists=0
for list in "${block_lists[@]}"; do
	torsocks wget --https-only --no-cookies -nv -O - "$list" >> $temphosts1
	if [ $? == "0" ]; then
		successful_lists=$((successful_lists+1))
	fi
done

#Test if temphosts1 is empty
if [ -s "$temphosts1" ]; then
	# Do some work on the file:
	# 1. Remove MS-DOS carriage returns
	# 2. Replace 0.0.0.0 with 127.0.0.1 to handle lists that already point to 0.0.0.0
	# 3. Delete all lines that don't begin with 127.0.0.1
	# 4. Delete any lines containing the word localhost because we'll obtain that from the original hosts file
	# 5. Replace 127.0.0.1 with 0.0.0.0 because then we don't have to wait for the resolver to fail
	# 6. Scrunch extraneous spaces separating address from name into a single tab
	# 7. Delete any comments on lines
	# 8. Clean up leftover trailing blanks
	# Pass all this through sort with the unique flag to remove duplicates and save the result
	echo "Parsing, cleaning, de-duplicating, sorting..."
	sed -e 's/\r//' -e 's/0\.0\.0\.0/127.0.0.1/' -e '/^127\.0\.0\.1/!d' -e '/localhost/d' -e 's/127\.0\.0\.1/0.0.0.0/' -e 's/ \+/\t/' -e 's/#.*$//' -e 's/[ \t]*$//' < $temphosts1 | sort -u > $temphosts2

	# Combine system hosts with blocks
	echo Merging with original system hosts...
	echo -e "\n# General malware blocking hosts generated from $successful_lists out of ${#block_lists[@]} lists on "$(date) | cat /var/local/hosts-blocking/hosts-system - $temphosts2 > /var/local/hosts-blocking/hosts-block

	# Apply final blocklist to system hosts file
	cp /var/local/hosts-blocking/hosts-block /etc/hosts

	# Clean up temp files and remind user how to restore
	echo "Cleaning up..."
	rm $temphosts1 $temphosts2
	echo "Done."
	echo "The merged blocking hosts file has been applied. To reapply it manually, run:"
	echo " sudo cp /var/local/hosts-blocking/hosts-block /etc/hosts"
	echo "You can always restore your original hosts file with this command:"
	echo " sudo cp /var/local/hosts-blocking/hosts-system /etc/hosts"
	echo "so don't delete that file! (It's saved read-only for your protection.)"
	exit 0
else
	# Prevent existing blocklists from being overwritten with empty list
	echo "Aborting: No blocklist content has been retrieved into the working file."
	exit 1
fi
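One detail worth checking: on typical setups the cron.daily directory is processed by run-parts, which silently skips files that are not executable, so mark the script accordingly:

```shell
# cron.daily entries must be executable to be picked up by run-parts
chmod +x /etc/cron.daily/hosts-block
```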

The Configurable blocklist sources section can be adjusted to include lists which have been commented out. Simply remove the leading “#”. You may want to do this if you don’t plan on connecting to any Google services, for example. Also you may find inspiration in adding lists from uBlock, uMatrix or other addons. Just make sure that the list uses IPv4 addresses.
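The cleanup stage can also be exercised in isolation when vetting a candidate list. This is a sketch using invented sample lines, assuming GNU sed and that lists point their entries at either 127.0.0.1 or 0.0.0.0:

```shell
# Feed a few representative raw-list lines through the same cleanup:
# strip CRs, normalize addresses to 127.0.0.1, drop non-address and
# localhost lines, blackhole to 0.0.0.0, tab-separate, strip comments
# and trailing blanks, then de-duplicate
printf '127.0.0.1 ads.example.com\r\n0.0.0.0 tracker.example.net # comment\n127.0.0.1 localhost\n! adblock-syntax line\n' |
sed -e 's/\r//' -e 's/0\.0\.0\.0/127.0.0.1/' -e '/^127\.0\.0\.1/!d' -e '/localhost/d' -e 's/127\.0\.0\.1/0.0.0.0/' -e 's/ \+/\t/' -e 's/#.*$//' -e 's/[ \t]*$//' |
sort -u
```

Only the two blackholed entries survive; the localhost line and the adblock-syntax line (which a hosts file cannot use) are dropped.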

If you want the blocklist applied immediately instead of waiting for the daily update job to run, just run the script directly with root privileges:

sudo /etc/cron.daily/hosts-block

All of the lists will be updated daily over Tor. You can check the status of your hosts file by running:

grep -e General /etc/hosts

It should reveal whether any lists were skipped, which may indicate that a link is broken. For example:

# General malware blocking hosts generated from 16 out of 17 lists on Sat 26 Feb 2022 12:49:16 AM EST

As with earlier customizations, make sure that Firefox is set to respect your system’s domain resolution instead of Mozilla’s disgraceful cloudflare honeypot. Now if your ad blockers fail for whatever reason, most malicious domains should still be blocked through this defense-in-depth strategy.
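One way to pin this down, assuming you manage Firefox preferences through a user.js file: the network.trr.mode preference controls Firefox’s Trusted Recursive Resolver (DNS over HTTPS), and a value of 5 switches it off by explicit choice, so the system resolver, and therefore your hosts file, handles all lookups.

```
// user.js: disable Firefox's DNS over HTTPS (Trusted Recursive Resolver)
// 5 = DoH off by explicit user choice
user_pref("network.trr.mode", 5);
```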