Author: Rex

Hostile DNS Management


One of the most annoying parts of modern technology is the invasive advertising that companies feel we are obliged to suffer. More than this, a significant portion of this advertising can contain malicious payloads that the ad host rarely seems interested in preventing. It’s time to take back the home network by blocking malvertising completely from all devices.

I’m going to say this first… you can probably do most of this via a PiHole installation, although I’m not sure how many of these layers it covers. I recommend it for anyone having issues with their ISP or wanting to keep their home browsing free of malvertising. I covered the topic lightly in a previous post on TikTok, but since then a few people have asked how I’ve deployed it.

Aims & Objectives

We have two hats to wear here: the poacher trying to deploy surveillance, and the home user wanting to rid their network of hostile sites and advertising and create a safety net. I’m going to focus on the latter – although if you want to understand how to defeat the former, this is a useful exercise in looking for clues.

You’ll get some fringe benefits too – caching DNS responses centrally reduces response times across all devices for web pages and apps that repeatedly call specific hosts.

While blocking malvertising I’ve discovered a few false positives – network scanning apps (Chrome) and some devices (iOS in particular) have been problematic. IDPS has repeatedly blocked iOS devices for attempted port scans and multi-protocol request groups, so I’ve reduced the paranoia on the IDPS in very specific scenarios. I remain unsure why these devices need to do this, although you should familiarise yourself with WebRTC to ensure protection is at the appropriate level to enable the feature, and no more.

This approach won’t fully work with only a single node change… a combination is needed: router-level and DNS-level. It didn’t feel right to co-locate DNS and router in our setup here, and I’m not impressed enough with router firmware (even Asus Merlin) to do both. Whilst Asus properly responded to a vulnerability report I submitted, I still see too many issues – particularly with packet fragmentation. I honestly don’t understand why router manufacturers insist on shipping old firmware with ancient Linux kernels, instead of well-known distros on modern kernels (arm64 or armel or armhf anyone? Come on! It’s 2022!).

I’ve used Asus as their hardware specs are better – running OpenVPN means tunnel bandwidth is CPU-locked, without being able to take advantage of more than one core.

Nevertheless, my overview diagram for blocking malvertising highlights the combination of router redirecting all DNS traffic to our LAN DNS server; the corresponding web requests for forbidden malvertising fruit sent to a web responder (which simply returns an empty pixel GIF); and the router allowing only our DNS server to make whatever requests it wants.

DHCP Server

Whether you’re using the router-based DHCP serving capabilities, or a decent dedicated DHCP service like ISC’s DHCP on Debian, you’ll need to configure the server to be authoritative, and provide the IP of your DNS server as the primary and secondary server for all devices.

This is just the first step, as there will be apps and web sites that will ignore this setting (trying to use their own DNS services for example). However this step will ensure that all your devices are given a default setting, covering the majority of the scenarios.

If you’re applying sticky IPs to specific devices, ensure those groups are also given the same DNS server.
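As a sketch, the relevant fragments of an ISC dhcpd.conf might look like the following – all IPs, ranges and the MAC address are illustrative placeholders, not values from my setup:

```
# /etc/dhcp/dhcpd.conf - relevant fragments only
authoritative;

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option routers 192.168.1.1;
    # hand every client the LAN DNS as both primary and secondary
    option domain-name-servers 192.168.1.53, 192.168.1.53;
}

# sticky IP reservations inherit the subnet options, but you can
# restate the DNS server per host to be explicit
host media-player {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.150;
    option domain-name-servers 192.168.1.53;
}
```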

DNS Server

First up, we need to deploy a DNS capability to the network which will be used by all network devices.

In our friendly anti-malvertising scenario we can simply create a Debian 11 server (I’ve used the arm64 image and an RPi 4), deploying Unbound as the resolver. It’s fast, recursive and validating, with the ability to call and verify DNS-over-TLS upstreams.
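A minimal Unbound sketch of that capability – recursion for the LAN plus a validated DoT upstream – might look like this. Quad9 is used purely as an example upstream, and the LAN range is a placeholder:

```
# /etc/unbound/unbound.conf.d/resolver.conf - minimal sketch
server:
    interface: 0.0.0.0
    access-control: 192.168.0.0/16 allow      # your LAN range
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."
    forward-tls-upstream: yes                 # verify the upstream certificate
    forward-addr: 9.9.9.9@853#dns.quad9.net   # example DoT upstream
```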

Warning: Unfortunately there’s an issue with 1.9.0-2+deb10u2 on Debian 10 if you’re using a chroot-ed config – SEGFAULT and crash on the first query. I’ve held the package back for now, but I’ll be upgrading the server to Debian 11 (Unbound 1.13.1-1) soon.

If we were deploying a hostile DNS capability to conduct surveillance I would use a different approach likely involving a covertly installed or compromised device – I’m not going to discuss that here for what will hopefully be obvious reasons.

Other than standard DNS definitions there isn’t a lot to do in the basic config; the complexity is in acquiring hostile DNS entries from managed lists. You have a plethora of choices out there, and here is one example specifically for Unbound. Frankly, whoever operates this deserves a medal.

You’ll need to script a regular download and incorporation into your Unbound config, along with a reload of the Unbound config via unbound-control. Something like:

# use the encrypted usb for temp storage to reduce wear on SD card
tempCache=/mnt/encusb/unbound-cache.dump   # adjust to your mount point

# grab a copy of the response cache
unbound-control dump_cache > $tempCache

# download the latest blocklist from your chosen source into the Unbound
# config directory (substitute the managed list you trust)
# curl -fsSL "$blockListUrl" -o /etc/unbound/unbound.conf.d/blocklist.conf

# now the configuration is updated, reload unbound
# (ditches the dns cache and picks up the new configuration of downloaded rules)
unbound-control reload
# .. if it errors often on reload, restart the service instead
# systemctl restart unbound

# restore the pre-update response cache - the conf will override the cache if a domain becomes hostile
unbound-control load_cache < $tempCache

# scrub the cache file
rm $tempCache

For each source of malvertising domains you want to trust – and that is your choice – simply insert that download into the script above.

You’ll also need to deploy either your choice of web server to serve a 1px by 1px transparent GIF for every request… or use the delightful pixelserv. I’m also aware of a Diversion project called pixelserv-tls, though I haven’t had a chance to check it out yet. It sounds promising in one sense, but the responding server still won’t match the requesting domain, so most clients will detect a certificate mismatch regardless. It really isn’t an issue in most instances.
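If you go the plain web server route, the pixel itself is trivial to generate – the base64 string below is the well-known minimal transparent GIF, written here to the current directory (point your web server’s docroot at it in practice):

```shell
# Generate the classic 1x1 transparent GIF that the web responder
# serves in place of every blocked advert request.
printf 'R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7' \
  | base64 -d > pixel.gif
```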

In practice, putting up a friendly single web page which is served instead of an advert or hostile site can appease users’ frustrations about the who-what-why – maybe they can raise it with you and you can have a positive conversation about why you blocked it in the first place.

Router Netfilter Approach

After we have an operational DNS capability on the target network, we need to enforce the use of it by silently pushing all DNS traffic towards our controlled / safe zone. If we were the poacher trying to deploy surveillance we’d also want to redirect without having to compromise all client devices on the network.

We’d collate a list of known hostile DNS providers and platforms with their own DNS capabilities, e.g. Faceflaps, Google, Comodo and Discord (there are many more). Special focus on Cloudflare, as they’ve partnered with Faceflaps. The DNS-over-TLS IETF working group was called “DPRIVE” for a reason…

I’ve created a script that does a DNS lookup for each (preferably use ldns’s drill, but BIND’s dig is fine), collects the associated nameservers for each hostile domain and adds them to a list. Each one can then be resolved to an IP address.
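The core of that collection loop can be sketched as below – this is not my full script, just the shape of it. It assumes drill (from the ldns package) is installed; dig output parses the same way. The demonstration at the end runs on a canned answer line so you can see what the parser extracts:

```shell
#!/bin/sh
# extract_records pulls the value column for a given record type out of
# drill/dig answer output, skipping ';' comment lines.
extract_records() {
  awk -v t="$1" '/^[^;]/ && $4 == t {print $5}'
}

# In use, loop over your hostile-domain list (requires drill / dig):
#   for d in $hostileDomains; do
#     for ns in $(drill "$d" NS | extract_records NS); do
#       drill "$ns" A | extract_records A
#     done
#   done | sort -u > hostile_dns_ips.txt

# demonstration on a canned answer line
printf 'example.com. 3600 IN NS ns1.example.net.\n' | extract_records NS
```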

You’ll want to add common DNS servers like Cloudflare and Google into this list so that anyone using drill on your network gets the response you want, for example:

In the screenie above you can see an attempt to use an alternative DNS to get a true IP resolution for…. thwarted by our netfilter rules and being provided our LAN DNS IP instead.

You’ll need to take that final list of IPs and loop through applying some pre- and post-routing rules on the router, which also apply a masquerade to the responses (to make the client think the responses came back from the server they requested them from). You can largely assume that DNS traffic is sent on 53, 5353, 853, 443 and 8443, but you may want to run some packet capture to check against each hostile source.

So, for the generic ports you’d have something like the following in your router firewall startup script:

targetLanDns=<your shiny new LAN DNS IP>
# plain DNS, mDNS and DoT; leave 443/8443 for the per-IP rules further down
# or you'll redirect every HTTPS connection on the LAN
dnsPorts="53 5353 853"

for port in $dnsPorts; do
  for proto in tcp udp; do
    # basic re-route on the request
    iptables -t nat -A PREROUTING ! -s $targetLanDns -p $proto --dport $port -j DNAT --to $targetLanDns:$port

    # masquerade the response
    iptables -t nat -I POSTROUTING ! -s $targetLanDns -p $proto --dport $port -d $targetLanDns -j MASQUERADE
  done
done


And then follow that up with similar rules that focus on known hostile DNS & DoT servers (irrelevant of the ports they may try and use) like so:

# where $dnsAddress is an item in your hostile DNS IP list
# (no port match here - catch whatever port they try to use)
iptables -t nat -A PREROUTING ! -s $targetLanDns -p tcp -d $dnsAddress -j DNAT --to $targetLanDns:853

# .... and ....

iptables -t nat -I POSTROUTING ! -s $targetLanDns -p tcp -d $targetLanDns -j MASQUERADE

Don’t forget: the DoT requests will likely fail at the client end due to a mismatch on the server certificate. Your aim here isn’t to dupe them but to force them to fall back to plain DNS.

You’ll note that these rules add an exception so that the router allows your designated safe LAN DNS to make its forwarding requests.

You’ll have the usual challenges where the router firmware / OS is likely running an out-of-date kernel, plus it may require additional steps to enable a package manager before you can finally install packages like ldns and so forth to get at usable commands.

Asus Merlin is pretty good, DD-WRT was OK, but it looks like OpenWRT is taking the lead again. All three allow use of package managers (use a USB stick and JFFS).

If I were an attacker I might also try to compromise the router, so ensure you schedule regular reboots and pipe all router logs to a centralised SIEM capability. Asus allows IP and port specification for rsyslog targets… a defect fixed after something I reported 😉
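On the receiving side, a couple of rsyslog lines are all the collector host needs to accept the router’s logs over UDP 514 and file them separately – the router IP here is a placeholder:

```
# /etc/rsyslog.d/10-router.conf on the SIEM / collector host
module(load="imudp")
input(type="imudp" port="514")

# file the router's messages on their own, keyed on its LAN IP
if $fromhost-ip == '192.168.1.1' then /var/log/router.log
& stop
```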

Additional Considerations

If we wanted to prevent our ISP trying to do almost exactly the same thing we’re doing (a number of US-based residential ISPs do this, and one or two UK ISPs have also tried it), we’d put a secure tunnel in place to a ‘safer’ country. We could then direct all DoT traffic to a safe zone away from our hostile ISP.

This way the ISP has no connection records or meta-data for the queries we’re sending – which they’d be able to acquire even if they can’t see the content of the queries protected by DoT. Ideally we could funnel our web traffic too, so that they would be unable to collect connection records for that type of traffic.

WAN Level

There are levels of complexity in this approach that I found an interesting intellectual challenge rather than a security necessity. You’ll have to choose your own level of involvement based on what you can commit to maintaining your chosen solution.

Dingo Engine Evolution


Has it really been two years since the first post? Wow. OK. It’s been an interesting couple of years! Since the initial tranche of posts, RD has changed course a little – I quickly realised that rather than being simply a decision engine, it could leverage all the useful information about decisions and put spam to good use for a change.

We’ve seen all the massive benefits of legislation such as GDPR (DPA 2018 for us in the UK), and some side effects where organisations have probably overreacted significantly, e.g. with WHOIS data. I’ve seen most groups err on the side of caution by cloaking all WHOIS data in case they miss something that could be classified as PII.

Even where the registrant is an organisation – data protection regulations apply to data subjects, not companies, and a data subject’s records as a director are publicly available in registers of companies worldwide (some free, some behind a paywall). My personal view, for what it’s worth, is that if you’re a director and already on the public record as such, the WHOIS entry need only contain the corporate registration detail.

Now there are ways round this if you have a legitimate query – I’ve had positive outcomes from conversations with every registrar or domain host I’ve needed to speak to. Of course, each has required proof of offences (duly provided) and verified I am who I say I am. One organisation considered asking me to apply for a Norwich Pharmacal order – which I completely understand given their predicament.

WHOIS made it immensely easy to track spammers and their behaviours, but it’s far from the only marker. I suspect organisations who sell anti-spam & security products have faced similar dilemmas and evolved to remove reliance on such markers.

And that is exactly where Ringo Dingo has gone.

Rule Types

In very high-level terms, emails contain a plethora of markers which allow us to route deterministically. Assessments can be made of active flags – sender emails, dodgy sender domains etc.; passive flags – derived from secondary layers of information beyond just the emails; geo-blacklisting; and of course proprietary factor analysis based on other indicators.

Each rule type is now configurable separately and the decision engine will allow use of regular expressions (which means header-specific rules).

Decision Engine Performance

Fairly early on it became apparent that caching base factors for re-runs, e.g. domain meta-data, IP variations, etc., would mean massive reductions in analysis times in many cases. Some early code was… well… frankly rubbish experimental code, and needed refactoring.

Having moved away from MySQL and its family, I’ve elected for PostgreSQL for relational and Mongo for non-relational data stores. Both are performing well, but I do miss some aspects of the MSSQL access model – and full stored procedure capabilities.

Over the last two years the average scan time for emails has gone from about 3.1s to roughly 0.75s. Obviously the decision engine portion of that is small but the enrichments take varying amounts of time.

Change In Architecture

In the background there are now two types of decision engine housing – one based on a Postfix milter which provides rule-based decisions, and another designed as a configurable Thunderbird extension. The idea is that your choice of email provider is agnostic and you can opt for whatever level of response is deemed appropriate:

  • Had an email address compromised and only ever get spam on it? Hey, let’s just reject all emails to that email address via an active capture rule
  • Maybe we note a string of similar sender factors that vary only by subdomain or another similar string – regex-based capture rules now deal with those, reducing the number of individual rules needed
  • Have a sender who is a persistent offender? Probably easier to apply an active rule, or use passive rules if they try and use multiple sender domains or addresses
  • Perhaps we’re a bit tired of Nigerian princes and UN Beneficiary funds from Vietnam. Probably easier just to block all emails relating to those locations

There are many more scenarios, and the detection sophistication has increased as smarter spammers try to hide the obvious markers. It’s been quite an interesting challenge, and to suit it the client-side focus has moved from GNOME Evolution to Mozilla Thunderbird, to make integration development easier (allegedly).

The (assumed) hosted email solution feels like it works in every model, and putting an MTA in front to act as an initial filter is definitely an option. I’ve gone through two different email providers in the last year, eventually settling on one which provides expression-based, domain-based and indicator-based (SPF, DMARC) blocking rules. More importantly, the latest provider allows org-level choices about whether to quarantine or simply reject email entirely based on your rules.

In this scenario I didn’t feel the RD MTA was needed – plus that’s one less component to maintain.

High Level View

It’s a pretty simple layout, and all spam-detection events are carried through to a dedicated topic for later use in analysis. You don’t need the whole emails for this and attachments aren’t necessary at all but security is key here. Later iterations will use the event stream for real-time analysis, but I don’t have the time or the driver to complete that just yet (other projects are now taking priority).

Given the ready availability of ML in all three of the big cloud vendors, it won’t be too difficult to provision ML-ops to do the job. Defining the logic for the learning models… well that’s a lot more difficult!

For now though, simple grouping of events by recipient, date and sender email will show the patterns of distribution of data. I can easily discern who sold which data-set to whom, and roughly when they started using it.
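That grouping can be as simple as a one-liner over a flat export. The sketch below assumes an illustrative "recipient,date,sender" layout (not RD’s actual storage format) – the counts surface who is hammering which address, and the dates show roughly when a data-set started being used:

```shell
#!/bin/sh
# Illustrative only: a flat export of spam events, one per line.
cat > /tmp/rd_events.csv <<'EOF'
alice@example.org,2022-01-03,promo@spamco.test
alice@example.org,2022-01-05,promo@spamco.test
bob@example.org,2022-01-05,prince@lagos.test
EOF

# group by recipient + sender; highest counts first
cut -d, -f1,3 /tmp/rd_events.csv | sort | uniq -c | sort -rn
```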

Dingo’s Future

A plateau has now been established with the core rating operating as a systemd service, hosting a plain API, callable from any client – Thunderbird extension, Postfix or another plugin type. The caching tier is currently being moved from dev into cloud ops, and this very blog will likely follow suit. Having effectively ditched Ionos to go back to a combination of Azure and AWS should make this a lot more manageable (and cheaper too).

It’s fully operational and has been cataloguing events for some time now, so I suspect I’ll let it carry on in the background for a while whilst getting some of my other projects back into shape. Next steps? Well, that’s absolutely shiny-toy territory… automated generation of new detection rule sets, based on real-time analysis of potentially undetected spam events. The decision engine will be allowed to operate from automatically generated rule sets.

It’s been very satisfying to see the Dingo decision engine quietly push all the Trumptard spam, phishing scam and data trader-initiated emails into the deleted folder without needing to check with me – the only reason I knew they were there was because I just couldn’t resist checking up on the results!

Thoughts on TikTok


Updated: The current attention to TikTok appears to be largely politically motivated from the Trump administration, so please fact-check all assessments on this topic.

TikTok’s sister app – Douyin – is only available within The Great Firewall of China but seems to retain a number of similarities (unconfirmed directly). One of the key issues has been things like deep fakes propagated on the platform, even before the evidence collected in an analysis of the app’s traffic and reverse-engineered codebase.

Love it or hate it, you cannot deny the platform’s meteoric success and the massive popularity of the mobile app. Content on the app emerged from its lip-syncing origins into staged comedy and more, gaining ever greater popularity.

Extrapolations from the codebase are more difficult due to the obfuscation used, so some of the guesses in this area are trickier to confirm. However, those inferences are backed up by behavioural analysis of the calls made by the app in sandbox environments by Talal Haj Bakry and Tommy Mysk.

Firstly, suspicion is raised because the app checks the clipboard frequently – bear in mind this is not a word processor or IM platform, so there are very few reasons why this could be justified.

Whilst unconfirmed, there is some anecdotal evidence of concern relating to a U.S. lawsuit filed in California. The claimant states that TikTok created a user profile without her permission and without any action from her, alleging that the firm sent all sorts of PII back to China. That case is ongoing, with no preliminary finding. Separately, given that TikTok has removed content offensive to the Chinese government, it appears the platform has the capability to lock out devices belonging to those posting content it feels is inappropriate.

In the case of Feroza Aziz there is a debate to be had on whether a string of previous content was appropriate – there’s too little information to make a judgement. However on balance it does appear that TikTok moderation is far more heavy-handed than US platforms such as Facebook.

That being said, we could also theorise that the current global political and economic climate – combined of course with the anti-China rhetoric from the U.S. administration – is the largest driver of the efforts to find problems with the platform.

On a practical note, I’ve built a mechanism to block TikTok from your network based on Debian Linux and Unbound (combined with appropriate configurations for your wireless and edge routers). The script could easily be modified for PiHole-based DNS (FTLDNS), although I suspect PiHole may add TikTok-based blocks in the near future.

You can read about that blocking mechanism here.

Blocking TikTok At Domain-Level


Firstly, let me say that this is a largely personal choice, but I’ve enjoyed a malvertising-free home & office network for some years now. I’ve not seen any adverts in years and have established multiple layers of blocking.

Whilst this is partly due to the nuisance of demands to buy products we don’t need for problems we don’t have to solve, it’s also partly an architectural and technical challenge to solve.

Approach Options

I could have approached this as a network-level block, which would have been simple if the platform were purely Chinese. However, they have offices and infrastructure in the U.S. and Singapore, making this more complicated.

In addition to this I can see they’re using Akamai for edge and CDN which means that I would potentially be blocking traffic for non-TikTok mechanisms.

The next-best option is to target TikTok domains and block them via DNS filtration. This isn’t perfect, because mobile and console apps are beginning to adopt DNS-over-TLS libraries to use their own platform-specific DNS capability (including serving ads via CNAME-ed sub-domains).

There are a number of ways to stop that involving mangling some firewalls and analysing traffic to regularly update your hostile DoT server list. However that’s not part of this particular post – maybe I’ll have time to explain the implementation in a post later this year (but don’t hold your breath ok?).

I roughly equate this approach to an obstinate app or service refusing to move out of your way, so you remove the floorboards from underneath their feet… they can still stand, just not anywhere near your flooring.


So the LAN DNS servers here operate within configurations that span:

  • Standard DNS resolution & caching services
  • Filtration to redirect hostile DNS back to the LAN DNS servers
  • LAN, VPC and WAN domain name entries for internal kit
  • Malvertising fencing

The solution for TikTok fits into this last category and is pretty simple. A maintained GitHub repository has a pretty good list of TikTok and related domains. I’ve created a very quick script which pulls that list and transforms it into an Unbound-friendly configuration.

The idea is that this script is run on a crontab every few days to get the latest list and has been running for a few days without incident already.
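The transform step is sketched below – not the full script, just its shape. The real input is piped in from the downloaded GitHub list; the demonstration here uses an inline sample, and in anger you’d write the output to something like /etc/unbound/unbound.conf.d/tiktok-block.conf then reload Unbound:

```shell
#!/bin/sh
# to_unbound turns a plain domain list (one per line, '#'/';' comments
# allowed) into Unbound local-zone entries that refuse resolution.
to_unbound() {
  grep -v '^[[:space:]]*[#;]' \
    | awk 'NF {printf "local-zone: \"%s\" always_nxdomain\n", $1}'
}

# demonstration with an inline sample list
printf '%s\n' 'tiktok.com' '# a comment' 'tiktokcdn.com' | to_unbound
```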

I’ve created the code snippet in full on GitLab, which you’re welcome to use and abuse for your own purposes. Enjoy 🙂

Attack Analysis – June 2020


We already have plenty to be concerned about, and it appears that COVID-19 has not slowed the amount of nefarious internet traffic generally. Of course, port scans, chancers with bots trying mass port connection attempts, and RATs and bots attempting to find other infected nodes are just BAU.


After looking at this week’s security logs across different points of infrastructure in three different countries, there is only a small difference in the types of illicit traffic detected. I should also point out that all these attempts were foiled at the outer layer. Before we go further, a note: the following statistics are based on attempted intrusions, illicit connection attempts and port scans detected at edge-layer security equipment or bespoke listening posts.

What follows is the diagnostic analysis taken from log data across multiple types of infrastructure. Generally speaking though there has not been any dip or spike in illicit connection attempts since January 2020.

Interceptions based on UK security logs

If we looked no further than the base logs from the UK, one might think we’re under direct attack from the U.S. and China. But when we look closer we see that the majority of U.S.-based source IPs belong to hosting providers, e.g. GoDaddy, Digital Ocean and Google Cloud (with many smaller hosting providers also in the list). China is more difficult to diagnose as most IPs sit within China Telecom ASNs, but a number are known VPN endpoints.

China accounts for 16% of detections in the previous week, whereas Russia is the next most prominent at 5%. Also noteworthy is the Seychelles with 2%, closed out by the Republic of Moldova with only a 0.3% share of the intercepted traffic.

Attacks with source IPs grouped by country – Five or more detections in the last week

We can see that Chinese IPs number only slightly more than the RoW – of 82 countries the top 25 are included here, so that leaves 57 countries balancing that out.

Are We Constantly At War Then?

Are we really constantly under attack by the US, China and Russia? Well no, not really.

A lot of the IPs sampled at random appear to be hosted cloud servers which anyone can rent, VPN endpoints and proxies. I would estimate that roughly 60% of the traffic I analysed sits within these categories, so we could potentially classify these as either false-flag operations (if a state actor was actually involved) or black hats covering their tracks.

The rest is probably probes and attacks by state actors, bots looking for nodes or C&C servers, and platforms like Shodan doing their usual enumerations.

It’s very difficult to say with great certainty unless the attacker has accidentally slipped up. But then you have to analyse whether that “slip-up” is actually a misdirection and therefore not a real slip-up…

China is very tricky to analyse fully with the tools available to me, but past experience working for EU companies reminds me that a lot of firms have VPN servers sitting on old desktops in the back rooms of shops in cities there – the simplest and easiest way of evading the Great Firewall of China. If companies can do this on a regular basis, black hats and state actors have greater incentive and financing to do it more often. Or better.

Another notable source is the Seychelles – not a hacker’s paradise island but rather the nominal home of IPVolume Inc., which is notorious for selling VPN, proxy and bandwidth services to black hats for nefarious purposes. Most providers block the IPVolume Inc. IP ranges as standard because there’s next to zero legitimate traffic originating from the network. It’s not natural traffic from the richest country on the African continent.

Pan-European Story

So some parts of the security infrastructure sit in global locations, and in two of these locations we see similar statistics. Our German servers appear to be the subject of far less interest – roughly 30% of the illicit traffic involved. The Spanish data centres show only slightly lower numbers than the UK-based stats, but with different distributions:

Attack Detections Grouped By Source IP Country

In Spain, Taiwan drops a few places and Vietnam is the most popular after the usual three (U.S., China, Russia). In Germany the Seychelles was the 4th most interested attacker, with Latvia and the Republic of Korea finishing out the top 10 – and Russia knocking China into 3rd.

The types of attack and distribution of activity were largely the same by country of origin IP. So either the same groups are using hosting / VPNs in those countries or… well, you get the idea.

Of the source European countries the Netherlands is top at 3.8% of total intercepts, followed by France with 2.6%, the UK with 2%, and Germany and Greece with 1.3% each. However, the Netherlands is a favourite spot for multinational hosting giants to set up data centres, France has OVH and the UK has FastNet – so the options for quickly standing up hosted VPN endpoints as a basis for launching scans and attacks open up immediately. The Netherlands is also one of the most frequently used European VPN server locations, which mostly explains its share of the detected traffic.

If we look at Australia, every single one of the source IPs is in OVH’s ASN for Oz, and they number only 0.3% of total attempts in the last week.

Types of Attack

The lion’s share of the illicit connection attempts (29%) were on TCP port 1433 – usually the default port for Microsoft SQL Server. I would hope that SQL Server instances are not being left open to the wider internet directly, but there may be some port NAT-ing perhaps? Even so, there should usually be a web server or similar between the internet and your database servers. A number of known RATs also listen on this port, so more likely these connection attempts are chancers seeing if a target IP has been infected.

The same goes for the next most popular ports, TCP 3389 and 5555, with 8% each. These are also default ports for SoftEther VPN, Microsoft WBT Server, ShoreTel VoIP diagnostics, various smart-TV vulnerabilities (I would assume where a home owner has mistakenly enabled UPnP on the router?) and MS Dynamics CRM default web ports.

The usual suspects were also attempted on TCP 555 (RATs galore), TCP 8000 / 8888 (alternative HTTP and internet radio streaming ports) and 9001. I suspect the latter is looking for improperly configured Cisco routers listening on their default config channel (surely not on the WAN?). Beyond that, only UDP 1900 saw any great amount of targeted connection attempts – SSDP and UPnP discovery ports ahoy.


All in all, 251 port variations (including different ranges) across 1320 detected attempted infiltrations on UK infrastructure. If we look at the RingoDingo listening posts in Spain and Germany, the results vary only slightly.

Overall though, the detection numbers are tiny – a slim fraction of a percent of what I’d normally see in private-sector business. But that’s largely because the infrastructure hosts servers which are not widely advertised, for customers whose relationships with us are not publicised. I won’t go into detail, but they’re nowhere near as juicy a target as nuclear power stations, banks, gambling and gaming firms, architectural and design departments, automotive and retail sites.

But that doesn’t mean we should just relax – we don’t want to become a platform from which groups could then attack those juicy targets!



So whilst there’s been substantial progress on RD across all tiers (currently doing data architecture and working on PostgreSQL), a problem that keeps cropping up across all enterprises… cropped up again. For me this is a smaller piece of design and development work which has benefits for a wide range of user groups – and is perfect for the open-source model.

Almost feels like a distraction that I’d meant to do something about a few years ago.

Often, where organisations want to take the next step in the maturity of their architecture practice, they need to look at how they manage their overall enterprise continuum. The starting point is a large volume of flat Visio drawings, a tome of Word documents and probably a whole chunk of PowerPoint presentations. I’d say Writer and Draw, but I’m unsure how many people would get the reference 🙂

If you’re fortunate you may be operating a “living document” approach in platforms like Confluence, but even with some of the diagrammatic markdown there, inevitably most of the drawings will still be in Visio. Living-document platforms allow you to ditch disparate single-document files and design & deliver change wiki-style. With the linkage to JIRA it really comes alive, of course.

Even really powerful Visio drawings are just drawings. It’s not like there’s a data dictionary or much meta-data behind the shapes – unlike DWG, iServer store or Sparx EA repositories.

What do you have against Visio then?

Nothing at all. It’s a great tool – I remember working for a small architecture & surveying practice in the late ’90s, building CAD tools for them to use in AutoCAD and Microstation. Autodesk were trying to get their developer network (the ADN) interested in developing on their flat drawing product, Actrix Technical. Intended as a template & stencil-based CAD tool (sound familiar?), focusing on simplicity vs. AutoCAD-level drawing dictionaries, it was struggling to gain traction with the ADN.

It looked like the biggest problem was that, in an industry already using AutoCAD, it was difficult to sell the benefits of a simplified tool. Surveyors just weren’t that interested. At the company I worked for at the time we hit upon the idea of using it for estate agents – when they’re assessing a property it might be a good way of quickly drawing a layout. We’d import a survey DWG into Actrix as the home layout, then give the estate agents a bunch of to-scale stencils of household items, e.g. furniture and appliances.

It sounded good enough that we approached Autodesk to see if we could get some help with funding for the project – they even came to Nottingham to see us.


Tracking Pixel


Vision and Concept

During the course of early development I realised there was a use-case I’d missed: a fairly simple one, with a capability that has limited effectiveness against spammers with good opsec.

I use paid-for tracking services for court document service, notices before action, some subject access requests and when sending enquiry emails to spammers. This latter use-case is useful for finding footholds for intel packages on spammers trying to stay hidden. Depending on what desktop system they’re using, or how sophisticated their webmail / mobile mail client is, I can get verification that the email was received – and sometimes even when it’s read. Very occasionally I’ll also get meta-data based on a document being opened.

But because I don’t send a lot of these emails, my company is paying for a service it doesn’t use very often – while already paying for data and web servers anyway.

Why not kill two birds with one stone and deliver our own document and email tracking service? During the design of this service we’re sure to learn something useful in our wider vision for Ringo Dingo, plus we can open-source it once it’s mostly engineered.


When I send an email I want to know it’s delivered. If I don’t get a bounce-back with an error, I know it’s reached the recipient – whether they open and read it is their problem, as delivery is complete.

Once it’s opened I want to see if I can trigger a request back to one of our servers – via domain aliases if need be – to call a web service. This service would gobble up whatever information was available to it, although I’m going to assume that the mail client may:

  1. Allow script execution
  2. Prevent script execution
  3. Block external web requests

In the first scenario I’ll assume that a standard JavaScript call will be made within the email window. I can write some obfuscated JS which isn’t readily detectable by anti-virus which collates as much information via DOM as it can, and which then makes a RESTful call to our web service.

In the second scenario this won’t work, but perhaps I can use an IMG element or similar to download something from the web service. In this case I’ll get the web service to respond with a transparent 1×1 pixel GIF, and use as much information as I can grab from the request and its headers.
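As a minimal sketch of that pixel endpoint, using only Python’s standard library – the port, path token and logging approach here are placeholder assumptions, not the final design:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A valid 43-byte transparent 1x1 GIF89a, built byte-for-byte
PIXEL = bytes.fromhex(
    "47494638396101000100800000000000"
    "ffffff21f90401000000002c00000000"
    "010001000002024401003b"
)

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Grab whatever the client reveals: per-recipient path token,
        # source address, user agent and the rest of the headers
        record = {
            "path": self.path,
            "client": self.client_address[0],
            "headers": dict(self.headers),
        }
        print(record)  # the real service would persist this instead

        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        # Without this a mail client may cache the image and only
        # ever hit the server on the first open
        self.send_header("Cache-Control", "no-store")
        self.end_headers()
        self.wfile.write(PIXEL)

# To run it: HTTPServer(("0.0.0.0", 8080), PixelHandler).serve_forever()
```

The Cache-Control header is the interesting design choice: without it you may only ever see the first open, never repeat reads.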

In both these scenarios it may be useful to duplicate each of these calls, so that if they are available within the mail client, a domain-based call is made as well as an IP-based service call. This would highlight whether our domain has been added to any ad-blocking DNS filters. As we’re not using this to spam people or for any nefarious purposes, I’m OK with this being an edge-case.

The third scenario is much more problematic. Here we’re not going to be able to get any direct tracking information without either doing something naughty or having the subject actively interact with something in the email. When we look at sending documents I think using links to these documents is the way forward here, as we can secure documents in a stored area on a server and ensure only recipients with the email links can access them.


The information sits in three categories here:

  • Information about the recipients’ systems
  • Documents we want to make available to the recipients
  • Authentication tokens allocated on a per-recipient-document combination

In all cases we need to ensure that meta-data stored in a database sits on an encrypted volume – I’ll need to look into MSSQL-style database encryption on PostgreSQL too.

In order to generate a new authentication token for a document, I’ll need to ensure that that request itself is authenticated and authorised. Because this is likely to be via a web service I want to protect the transport of that request; protect the username-password in the payload of the request; and ensure that the verification keys on the database server are hashed and stored on encrypted medium.

I’ll also need to ensure that the access to this security silo is segregated from the rest of the meta-data in case there is a breach; and only accessible by the API via stored proc.
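As an illustrative sketch of the per-recipient-document token scheme – function and field names here are hypothetical, and the real version would sit behind the API and a stored proc with the hash store in its own silo:

```python
import hashlib
import hmac
import secrets

def issue_token(recipient: str, document: str, store: dict) -> str:
    """Create an access token for one recipient/document combination.

    Only a SHA-256 digest of the token is persisted, so a breach of the
    meta-data store doesn't yield working document links.
    """
    token = secrets.token_urlsafe(32)            # unguessable, URL-safe
    digest = hashlib.sha256(token.encode()).hexdigest()
    store[(recipient, document)] = digest        # stand-in for the DB silo
    return token                                 # goes into the emailed link

def verify_token(recipient: str, document: str,
                 presented: str, store: dict) -> bool:
    digest = store.get((recipient, document))
    if digest is None:
        return False
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(candidate, digest)

# usage
silo: dict = {}
link_token = issue_token("alice@example.net", "doc-42", silo)
assert verify_token("alice@example.net", "doc-42", link_token, silo)
assert not verify_token("alice@example.net", "doc-42", "guessed", silo)
```

Storing only the digest means the database never holds anything directly usable, which complements the encrypted-volume requirement rather than replacing it.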

Design and Engineering

OK, so we have our vision and principles defined – let’s get busy. GitLab repo created, but we can use this piece of work to check & test some engineering principles we’re going to re-use in Ringo Dingo overall.

I want to take the ideas I’ve considered in CI previously and apply them here. Lessons learned will be invaluable in the other repositories that will make up the Ringo Dingo platform.

The first work to be done is documenting the design – I’d already added a lot of this, but I think I’m now going to rework it into the Wiki proper.

Repo: (may still be private until first release is out of testing)

CI Design


Let’s not derail during engineering stages

One of the key areas that needs focus at all times is quality. Without good QA the defect rate rises due to uncertainty introduced by changes to the system’s capability. It’s not just ensuring the solution builds – although that might be the first test – but performing analysis for optimisation and security.

Lots of benefit to be gained from a little effort up front. I’m a big fan of QA and DevOps, as this will create a level playing field for anyone that wants to contribute.

Evolution is the process by which testing a new capability either proves it successful, or kills it. Is this capability operable and does it have value? And I don’t mean “…well it might have value if some company does ‘X’ in ten years…”.

Static Code Analysis

You can’t get all the problems via static code analysis, but you can potentially identify traits which may give clues about flaws in your code when it’s executing. I’m going to come back to the security analysis and focus on the coding standards analysis, expecting that two tool-sets will be needed which are focused on each particular job.

To get both in one and have both capabilities working well in one tool would be a win-win.

However, Ringo Dingo throws another curve-ball in that I know .NET Core, Python, HTML, SQL (ISO/IEC 9075:2003, i.e. SQL:2003, as well as SQL-99) and JavaScript are all involved in the platform.

Code Quality Analysis

I’ve typically leant towards something like Fortify (which covers security and code quality) or Checkmarx in enterprise-grade scenarios, but I think this calls for use of open-source approaches. VS Code already has substantial live code-quality analysis tools with the C# plugins, so I don’t want to do too much in this category.

  • SonarSource Community (.NET) – I’ve listed SonarLint in the security analysis section below, but it could be that the same tool-set covers quality too.
  • nDepend (.NET) – Like the technical debt estimation aspects in here… if money was going to be forked out this would be near the top of the list.
  • TBC (.NET) – Just remembered I’m going to need something like dotTrace for performance analysis and…
  • GitLab (multi-language) – OK, so I searched for Fortify-esque tools and found a GitLab page describing how its Auto DevOps covers this already.
  • Query Plan Analysis (SQL) – Not so much a tool as use of the RDBMS platform’s query & analysis. I’m not keen on building a toolset for GitLab CI from scratch, as I did for MS SQL in past years. There’s some notation in the SonarLint documentation, but that’s IDE-side.

What’s clear here is that whilst I’ll need to look for a dotTrace equivalent for VS Code and Linux, the default CI can be configured in GitLab to do a large portion of the work.

OK, so I need to figure out a couple of things, but this looks very promising.

The other core part of CI is running the unit tests successfully – something that should be done before code is pushed back into the working branch, but often skipped or forgotten.

For .NET it’s fairly simple right up until API call tests (perhaps those should fall into integration testing rather than automated unit testing), and I need to look into Python and JavaScript unit testing.
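For Python, the standard library’s unittest module is probably the zero-dependency baseline (pytest being the popular alternative). A trivial sketch, with a placeholder function standing in for a real Ringo Dingo unit:

```python
import unittest

def classify_header(value: str) -> str:
    """Toy stand-in for a real Ringo Dingo header-parsing unit."""
    return "suspicious" if "bulk" in value.lower() else "normal"

class ClassifyHeaderTests(unittest.TestCase):
    def test_bulk_precedence_is_flagged(self):
        self.assertEqual(classify_header("Precedence: Bulk"), "suspicious")

    def test_ordinary_header_passes(self):
        self.assertEqual(classify_header("Precedence: first-class"), "normal")

if __name__ == "__main__":
    # A CI job would normally run `python -m unittest discover` instead
    unittest.main(argv=["tests"], exit=False)
```

The same test files run identically in a GitLab CI job and on a contributor’s machine, which is exactly the “push only after tests pass” behaviour we want.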

Security Analysis

I used the OWASP list of code analysis tools as a basis, focusing on .NET. Really this tool needs to be runnable as part of a solution build in VS Code, and somehow runnable as a build task job on the CI runner. That way contributors developing code could ignore security warnings or add their local suppression configuration without affecting the end result – as GitLab CI will enforce the full rule set.

Of course we should also expect that code analysis tools are specific to language, and that we will need to configure the GitLab runners for each code base update. I’ve selected the following initial candidates for this category on a per language basis:

  • Bandit (Python) – Looks like it’s maintained currently, and could give good coverage for issues in Python. Needs a road test.
  • Google Search Diggity (N/A) – Think this may form part of the post-deployment scrutiny rather than a codebase check. It’s focused on assessing a web application whilst running.
  • Security Code Scan (.NET) – This looks like a good tool to road-test for use within the environment. Not sure if this will work in VS Code on Linux yet, though it has NuGet package installation.
  • LGTM (multi-language) – Could be the CI tool at the GitLab runner end. Needs…
  • Puma Scan (.NET) – The server-side flavour is commercial, so the end-user (community) edition needs careful comparison. Do I want to use a commercially-funded product which may later close off, but be audited and funded properly… or an open-source (SCS) alternative which is community-dependent?
  • SonarLint (.NET) – Like the look of this – an IDE-embedded live code analysis tool. Works with VS Code OotB. Maybe this for the IDE, and either SCS or Puma Scan server-side? It’s part of the overall SonarQube platform which was listed in the previous section.

I’ll also take a look at OWASP O2 for the overall Ringo Dingo piece.

Other Thoughts

For now I’m going to say it’s “OK” to add overrides for warnings on exceptions relating to coding standards such as curly-braces-on-newline, and other such stuff. I’m a K&R-style fan and don’t care much for that – I’m likely to make that a standard for the repository submissions as well. I’m also not interested in debating that aspect.

I’m initially going to add suppression for IDE0063, so that standard using statements don’t produce warnings. Whilst it’s ‘easier’ to use the newer notation, I also see many defects created where someone forgets the scope of something (or mixes scopes at the same indentation level unintentionally).

Conversely, it’s not OK to add a suppression just to get a section of code working. Coding style is one thing, but it’s likely that particular warning or error was there for a reason.

The problem area is going to be code analysis for PostgreSQL. I suspect there will be custom grep scripts for GitLab CI which search for particular problem syntax, plus a need to create unit testing for procs to ensure they query within performance limits. No small task.

If I build that capability, I’m definitely going to open-source it as the community will be able to use the framework to do a far better job than I can at this stage.

It’s also worth periodically reviewing why warnings or exceptions might be explicitly (configured) overridden. Something you might do in an early phase to get the component working might not be appropriate later, especially approaching production.

Definitely need to review all suppression and exemptions before pre-prod deployment, and will have to think about what mechanisms will help achieve that – perhaps something linked between development branches and sitting in-between dev and master branches? We’ll see – normally current version is master, with development branches for each feature group in development.

So much to consider.

MariaDB & MySQL


The transport layer security dance. Like it?

I was pretty disappointed with both these RDBMS, having been catching up on the latest developments recently. For almost the last fifteen years I’ve been involved with high-transaction-rate, low-latency platforms and solutions – almost entirely focused on MS SQL Server, Oracle and PostgreSQL in the relational world. Non-relational solutions provide for other areas of capability, but MySQL isn’t really capable of the “big stuff” on its own (without some serious middleware, a.k.a. Google and Wikipedia) – though it’s been a favourite of mine in the past where SME-level solutions have needed an open-source approach.

You may remember also that Big Red scooped up Sun in 2009, catching MySQL in the same net – much to the consternation of MySQL’s users. Oracle’s reputation isn’t good in the vendor space, nor has it got a great track record on technical / developer support.

In fact, such was Oracle’s concern at an open-source contender like MySQL gaining traction in Oracle’s own enterprise RDBMS space, it tried pulling the rug from underneath MySQL’s feet a few times – most notably by buying the BerkeleyDB engine a few years prior to the Sun acquisition. Shortly after that sale took place MySQL dropped the engine, as it was realised it was barely used anyway.

New feature requests and ideas being mulled over on the MySQL dev side also meant that MySQL could potentially go toe-to-toe with Oracle RDBMS in most scenarios.

Oracle weren’t going to give up so easily and the development community were acutely aware of this.

So much so that Michael Widenius led the “Save MySQL” campaign, petitioning the European Commission to stop the merger. That effort was largely unsuccessful, so he forked MySQL the day the sale was announced, created the MariaDB moniker and wandered out of the Sun enclosure. A number of Sun’s MySQL developers followed him, such was the concern at Oracle’s actions.

That tale being told, I’m still bewildered as to why repository builds of both MySQL Community and MariaDB are built with YaSSL – an ageing TLS library from WolfSSL. WolfSSL don’t even link to the product directly on their website anymore, nor include it in their product comparison chart.

OpenSSL, WolfSSL and BoringSSL are just three of the transport-security libraries available, so it seemed counter-intuitive to compile with a library that doesn’t even support TLS 1.2. From what I can see it doesn’t support EC ciphers at all either. Strange that we as a community would want to continue using it.
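If you want a quick sanity check of what the TLS stack behind your own tooling supports, Python’s ssl module will report it (output obviously varies by system):

```python
import ssl

# What does the TLS library behind Python's ssl module support?
print(ssl.OPENSSL_VERSION)              # library name and version string
print("TLS 1.2 supported:", ssl.HAS_TLSv1_2)
print("TLS 1.3 supported:", ssl.HAS_TLSv1_3)

# EC cipher suites available to a default client context
ctx = ssl.create_default_context()
ec_suites = [c["name"] for c in ctx.get_ciphers() if "ECDHE" in c["name"]]
print(f"{len(ec_suites)} ECDHE cipher suites available")
```

On any recent Linux build against OpenSSL you should see TLS 1.3 and a healthy list of ECDHE suites – exactly what YaSSL can’t give you.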

But with some digging I found Debian bug 787118 from 2014. Digging further into that it became clear that this was a legal issue:

“OpenSSL cannot be linked to from GPL code without the authors providing an exception to clause 6 of the GPL, since the OpenSSL license imposes restrictions incompatible with the GPL.”

I just learned something new. There’s a bit of back-and-forth on that same Debian bug and an inference that the YaSSL reference will be replaced by CYaSSL. Sounds better?

Nope. The first remark visible on the CyaSSL GitHub repo is a warning to use WolfSSL instead. In fact it looks like CyaSSL hasn’t been updated since 2015 / 3.3.2, and the patch to it was never implemented.

Default MDB Install SSL status

So no OpenSSL or BoringSSL, and no sign of the MariaDB maintainers looking to compile with WolfSSL instead of obsolete libraries? WolfSSL is also an embedded TLS library, and it supports TLS 1.3 and EC ciphers. I know there’s a lot of talk about how clunky & unwieldy the OpenSSL code base has become, so I can see mileage in opting for something more efficient.

Frustratingly, I can see that there has been work to this end on the MariaDB JIRA, which appears linked to 10.4. Looks like the work is done in upstream versions but Debian unstable hasn’t seen a whiff of this yet, with mariadb-10.3 still hogging the limelight. Unfortunately this highlights one of the few disadvantages with widely-used FOSS products, in that time is required to properly evaluate and then commit to replacing something as important as security libraries.

AWS enforces certificate-based authentication, which is really useful in this context, and AWS do offer MariaDB RDS instances – so I suspect the underlying OS is not Debian stable. However, this shows that it is possible and operational. For now I just don’t see the value in setting up stunnel to work around the problem, as this essentially means the mechanism used for development won’t replicate the mechanism on prod.

That’s the problem with not developing against a standard security design: we should never develop anything that is instantly & fundamentally different outside of the dev environment.

I’m not comfortable using non-EC transport-layer security for something as serious as security data & meta-data, especially in data centre scenarios. Think I’ll let MariaDB get this under control before exploring any further, but I fully intend to write both PostgreSQL and MariaDB-compatible data layers for Ringo Dingo eventually.

For now though, I’ll continue to focus the work on PostgreSQL & OpenSSL (for its sins).

And so it begins…


The journey from product roadmap to backlog prioritisation is always challenging…

I’ve already written a good portion of the middle-ware for the product post-pilot, but am now realising it’s a pretty big job for one person. Building servers and selecting appropriate EU IaaS providers, adhering to legislation and keeping technically relevant are only small parts of the roadmap.

In 2016 there was a lot of spam going about. At one point I was getting thousands of emails a week of which approximately half were to addresses I’d used explicitly to send SARs (subject access requests).

It wasn’t just appalling re-use of data, it was wanton flouting of data protection and privacy laws for profit. Some of the attitudes showed a Facebook-level appreciation of data protection law – including one response to a standard ASA complaint which made the ASA uncomfortable and gave a few of us a laugh:

In response to a complaint to the ASA, which related to an advertiser (AdView & Roxburghe) buying lists of names and email addresses from Indian data traders, failing to verify or ask for explicit consent, and then using those people’s details to send them spam advertising the Roxy services

Fortunately two things happened:

  1. I had acquired a lot of skills & some significant experience in technical fields over the years
  2. The more spam I received, the more data & meta-data I was accumulating on spammers

After a couple of years of trying direct action in the UK County Courts (with mixed success) I realised I could use the meta-data to build an email security product which I could then distribute as open source. I started tinkering with Python and my usual email client, GNOME Evolution – as it allows you to easily create mail filters which call a script.
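Evolution’s pipe-to-program filter hands the script the raw message, so it just has to parse it. A stripped-down sketch of that kind of metadata extraction – the field choices here are illustrative, not the pilot’s actual schema:

```python
import json
from email import message_from_bytes
from email.utils import parseaddr

def extract_metadata(msg) -> dict:
    """Pull out the headers most useful for profiling a sender."""
    return {
        "from": parseaddr(msg.get("From", ""))[1],
        "reply_to": parseaddr(msg.get("Reply-To", ""))[1],
        "message_id": msg.get("Message-ID", ""),
        # Received headers trace the relay path back towards the origin
        "received": msg.get_all("Received") or [],
        "user_agent": msg.get("User-Agent", msg.get("X-Mailer", "")),
    }

if __name__ == "__main__":
    # Evolution pipes the raw message in on stdin, i.e.
    #   msg = email.message_from_binary_file(sys.stdin.buffer)
    # A canned sample stands in here:
    sample = message_from_bytes(
        b"From: Spam Co <offers@spam.example>\r\n"
        b"Message-ID: <abc123@spam.example>\r\n"
        b"\r\nBuy now!"
    )
    print(json.dumps(extract_metadata(sample), indent=2))
```

The Received chain is the valuable bit – spammers can forge From and Reply-To, but the relay path is harder to hide completely.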

That evolved into a much wider capability that I’ve piloted on my own mailbox for the last two years or so. All seems to work reasonably well and efficiently. However, after visiting a few of the stands at the InfoSec Europe 2019 trade show at Olympia, I realised there are a lot of companies selling the same or similar platforms for a lot of money.

However none of them seemed to interact with spam email the way I was designing my product.

Which brings us to the subject of this blog – an email security product code-named “Ringo Dingo”, after asking for suggestions from everyone at home. Next time perhaps I’ll pick a random word from a dictionary.

So I needed a way of tracking the random thoughts crossing my brain about it, rather than forget something critical or unusual that would be good to add to the overall capability. I started using my usual kanban, Trello, to log ideas and triage the good stuff from the crap, and have progressed all workable options to boards.

The pilot is pretty much done and dusted, so I started redesigning Ringo Dingo as middleware – which would enable access from any mail client or MTA. What was missing was a book of progress on the overall piece… which is where this blog comes in.

My other blog focuses on other topics relating to data protection, whereas this record is purely R&D.