Wednesday, 5 August 2015

Simulate Network Latency, Packet Loss, and Low Bandwidth on Mac OSX

OSX used to ship the binaries to configure 'dummynet' from FreeBSD, which has the capability to do WAN simulation.

Mavericks no longer ships the ipfw binary but still has the dummynet code in the kernel. Find and copy the ipfw binary from an older machine into /sbin and you're good to go.


Inject 250ms latency and 10% packet loss on connections between the workstation and a web server (<web-server-ip> below), and restrict bandwidth to 1 Mbit/s.

# Create 2 pipes and assign traffic to/from the web server:
$ sudo ipfw add pipe 1 ip from any to <web-server-ip>
$ sudo ipfw add pipe 2 ip from <web-server-ip> to any
# Configure the pipes we just created with latency, bandwidth & packet loss:
$ sudo ipfw pipe 1 config delay 250ms bw 1Mbit/s plr 0.1
$ sudo ipfw pipe 2 config delay 250ms bw 1Mbit/s plr 0.1

$ ping <web-server-ip>
PING <web-server-ip>: 56 data bytes
64 bytes from <web-server-ip>: icmp_seq=0 ttl=63 time=515.939 ms
64 bytes from <web-server-ip>: icmp_seq=1 ttl=63 time=519.864 ms
64 bytes from <web-server-ip>: icmp_seq=2 ttl=63 time=521.785 ms
Request timeout for icmp_seq 3
64 bytes from <web-server-ip>: icmp_seq=4 ttl=63 time=524.461 ms

$ sudo ipfw list | grep pipe
  01900 pipe 1 ip from any to <web-server-ip> out
  02000 pipe 2 ip from <web-server-ip> to any in
$ sudo ipfw delete 01900
$ sudo ipfw delete 02000
# or, flush all ipfw rules, not just our pipes
$ sudo ipfw -q flush

Round-trip is ~500ms because a 250ms delay is applied to both pipes, i.e. to incoming and outgoing traffic.

Packet loss is configured with the "plr" (packet loss rate) parameter. Valid values are 0 – 1. In the example above we used 0.1, which equals 10% packet loss.
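Because plr is just a probability, the pipes can be reconfigured on the fly with any rate you like; a sketch, using the same pipe numbers as the example above (needs root and the ipfw binary):

```shell
# 5% loss on the outbound pipe, 1% on the return path
sudo ipfw pipe 1 config delay 250ms bw 1Mbit/s plr 0.05
sudo ipfw pipe 2 config delay 250ms bw 1Mbit/s plr 0.01

# Inspect the current pipe configuration
sudo ipfw pipe show
```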

Thursday, 2 July 2015

Docker & Consul Lab

In my spare time I've been building a small Docker lab.  I wanted to see what all the fuss is about and also to bring some reality to the kool-aid drinkers in the office.

I've been around long enough to know that there's no magic pill; variations of really good ones have appeared over time, but they all need to be mixed with something else.

Docker expands on the Linux containment features (LXC, plus the namespaces and cgroups built into kernels since 2.6) which allow a process to exist within its own space on the system. Similar to virtualisation, but without a hypervisor and the overhead a hypervisor brings by needing to be all things for all people.

Docker allows you to create & package a container. Let's say we have a simple Java SMTP service. All the components needed to run that service, Tomcat, code files, run within the container, which can be moved or copied somewhere else and function in exactly the same manner.

Docker also comes with a registry, either public or private, which acts as a repository for Docker images. Now you can easily distribute containers or pass them along the dev pipeline to QA and Ops.

DevOps nirvana! The excitement is palpable!

And yes, if your service is 100% self-contained then that's a valid statement.

It's when you start to try and build a bigger solution (and this is probably where my inexperience comes in) that you start to think, and find some of the downsides.

Docker deals with networking within the Docker binary. It hands containers private addresses on a local bridge, which are then port mapped to the host's IP. If a container is moved to a new host, its endpoint changes.

Inter-container communication is via tunnels built between them, not via the network.
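As a sketch of that port mapping (the nginx image and the port numbers are just illustrative, and a running Docker daemon is assumed):

```shell
# Run an nginx container, mapping host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx

# The service is reached via the *host's* address, not the container's
curl http://localhost:8080/

# The container's own IP is private to the Docker bridge and changes
# if the container is recreated on another host
docker inspect -f '{{ .NetworkSettings.IPAddress }}' web
```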

How do you find a service?

The answer to that question has already been dealt with by others doing true SOA or web scale: write a service registry, use queuing, load balancers/APIs, ZooKeeper. It's a problem that's been solved by anyone doing dynamic scale, but this tweet/blog post:

led me to look at Consul.

At its core is a really clever service registry, layered with health checks, clustering and multiple-datacentre support, which can be queried using an API or via name lookups (DNS) against the Consul service port. It's also able to integrate with something like dnsmasq to redirect queries; this would allow seamless integration into an existing environment where DNS is being used to locate services.
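As a sketch of the two query paths against a local agent (the service name "web" and Consul's default ports 8600/8500 are assumptions):

```shell
# DNS interface: Consul answers SRV/A lookups on its own DNS port (8600)
dig @127.0.0.1 -p 8600 web.service.consul SRV

# HTTP API: the same registry data, returned as JSON
curl http://127.0.0.1:8500/v1/catalog/service/web
```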

Consul is a small binary, which in my case runs within the container but could just as simply sit on the host OS. It uses a config file to determine what to register with the Consul servers. In my lab it's using a static config, but in reality you would use a CM tool (Puppet, Chef, Salt, Ansible) or automatically generate the config using a handy add-on, consul-template.
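A minimal static service definition might look something like this (the service name, port and check command are made up for illustration):

```shell
# Write a service definition that the local agent registers on startup
cat > web.json <<'EOF'
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "script": "curl -sf http://localhost:8080/ > /dev/null",
      "interval": "10s"
    }
  }
}
EOF

# Point the local agent at the directory containing it, e.g.:
#   consul agent -config-dir=. ...
```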

The local Consul binary deals with health checks; nice, immediately a distributed system. The Consul servers (minimum of 3) run in a clustered mode which the local Consul agent is aware of, so there's registry HA built in.

In summary, pretty impressed with Consul. It's early days, but something to keep an eye on.

But back to Docker.

Docker in itself is not yet a one-stop shop, and maybe it's not supposed to be, but other players are entering the game to add to the package and I think will continue to. Is Docker a death knell for virtualisation? If you're a web-scale company and all you do is web services then yes, it probably is. For the enterprise, or shops that are not developing apps, then probably not. But you can of course run Docker on hypervisors.

It also requires the dev teams to shift their model. I know lots of places are doing SOA and microservices, but lots aren't. For them, Docker is not that magic pill.

Something that hadn't occurred to me until I watched this talk, AppSec is eating security, is the security benefit containerisation brings. The host can be a massively cut-down OS, and each container contains only the bare minimum to run its service. The service also has no state, its IP is dynamic, it has no fixed abode. The attack surface is not only reduced, it becomes all slippery. Patching is also (in theory, and if you code correctly) a breeze.

But on the flip side:

Docker is potentially a game changer, but not without work and consideration.

A decent book is The Docker Book: Containerization is the new virtualization

Wednesday, 20 May 2015

Firefox on Kali

This dropped into my Twitter TL - Installing Firefox on Kali Linux - which was perfect timing, as I'd tried to do this the previous day.

NB// If, like me, you ignore the part about uninstalling Iceweasel, you'll end up with an apt-get error message of 'half-installed' for Firefox.

Go back, remove Iceweasel and then :
apt-get install --reinstall firefox-mozilla-build
I also needed to manually edit sources.list as the cut and paste didn't work. Your entry should look like:
deb all main

Friday, 24 April 2015

Setting System Wide Proxy on Ubuntu

Put your export settings in /etc/environment
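Note that /etc/environment takes plain key=value lines rather than shell export statements; a sketch (the proxy host and port are placeholders):

```shell
# /etc/environment - read at login and applied to all users' sessions
http_proxy="http://proxy.example.com:3128/"
https_proxy="http://proxy.example.com:3128/"
no_proxy="localhost,127.0.0.1"
```

Log out and back in for the change to take effect.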


Friday, 10 April 2015

OSX Keyboard Media Keys Stopped Working in iTunes

For me, the Google Play extension in Chrome had stolen the media key functions.

Chrome | Settings | Extensions & scroll to the bottom of the page | Keyboard Shortcuts

Thursday, 2 April 2015

Docker, CentOS & a Proxy all walk into a pub

CentOS 7.1 behind a Squid proxy. Docker installed using yum.

docker info

FATA[0000] Get http:///var/run/docker.sock/v1.18/images/search?term=apache: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?

Followed the official instructions @ which seemed to fix docker info but subsequent

docker search <name>

would fail with :

FATA[0127] Error response from daemon: Get dial tcp connection timed out

Tried setting the environment variable manually, and also running the command inline:

https_proxy=http://<server>:<port> docker search <name>

Stumbled across a blog suggesting adding environment variables to /etc/sysconfig/docker:

 export HTTP_PROXY HTTPS_PROXY http_proxy https_proxy   

Problem fixed
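For the export line to do anything the variables need to be defined first; the full fragment in /etc/sysconfig/docker would look something like this (the proxy host and port are placeholders):

```shell
# /etc/sysconfig/docker - sourced by the docker service on CentOS
HTTP_PROXY="http://proxy.example.com:3128"
HTTPS_PROXY="http://proxy.example.com:3128"
http_proxy="$HTTP_PROXY"
https_proxy="$HTTPS_PROXY"
export HTTP_PROXY HTTPS_PROXY http_proxy https_proxy
```

Restart the daemon afterwards (systemctl restart docker) so it picks up the new environment.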

Tuesday, 3 February 2015

Error when installing Windows 8 "The computer restarted unexpectedly or encountered an unexpected error. Windows installation cannot proceed."

If you're getting this error then, when presented with the message:
  1. Press SHIFT+F10
  2. In the black command window type: regedit & press Enter
  3. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\Setup\Status\ChildCompletion
  4. Double click on setup.exe and change the data from 1 to 3
  5. Press OK on the initial error message

Monday, 12 January 2015

QNAP TS-509 & NetGear GS716T Port Trunk

I don't particularly need a 2Gbit/s trunk from the NAS, but a recent switch upgrade to a NetGear managed switch, the GS716T (which for £120 is bloody good value), gave me the option.

Set the NAS for bonded 802.3ad & created the LAG group on the switch.  Easy.

Two days later I noticed my Windows machines had lost their SMB mounts; the Linux boxes were all fine. Disabling one of the NAS bonded ports brought it all back.

I suspected some kind of ARP timeout. Switched the LAG port from static to LACP and all was well. And has remained so. No idea why Windows was FUBAR and not Linux, and without cracking open Wireshark I can only guess.

Wednesday, 7 January 2015

Wordpress (WPMU) Migrate & Domain Change

Goal: using my existing provider, create a dev env of a Wordpress multisite deployment.

The prod site was two simple sites, 4-5 pages in each, with minimal plugins. Eventually fell upon Duplicator from Life in the Grid.

Documentation is decent and I had no issues until I started the deploy on the destination: the decompression failed with a PHP error when I ran installer.php. I followed the FAQ and the manual extraction process, which worked fine - upload the decompressed files and archive, run installer.php and follow the prompts.

Site 'duplicated' and working! ... almost. The primary network site worked fine; the second site was down. Changed the URL via the WP Network Site Admin and created the equivalent subdomain entry, which allowed me to browse to the second site. Progress!

Main site :
Second site :

And then the fall. Trying to access wp-admin for the second site put me into the [in]famous Wordpress login loop. Here's a nice write-up -

Followed a ton of links and tried all the suggestions with no success. Went through the db with a forensic microscope in case a URL rename had been missed, all with no joy. Eventually I decided to create a new site and see what the results were. If it failed, I'd know it was more a WPMU issue than the second site's setup.

Setting up site3, it was created as .. oh, a sub-sub domain. The light bulb went on, but I carried on. Site3 worked fine.

Changed my second site & DNS entry to .. Golden.