Jacques Mattheij

Technology, Coding and Business

Dark Patterns, The Ratchet

A dark pattern in the design world means something that is purposefully created to mislead users and to get them to perform actions against their own interest.

Since we are supposedly required to consent to giving up our privacy, a cottage industry has developed of entities doing everything they can to obey the letter of the law while at the same time ignoring the spirit of it, or in fact turning the law on its head. Consent forms are the norm; picture your typical consent form as a very large chunk of barely legible text with a bright orange ‘Yes, give me that benefit’ button, and another one (much less inviting looking) saying ‘no thanks, I won’t consent’.

If that were it, then maybe it wouldn’t be so bad. But that’s really only the beginning. A good chunk of the consent forms are missing that second option; your only alternative is to figure out how to close the damn dialog without accidentally clicking the button. An even nastier version employs what I call ‘The Ratchet’. Instead of saying ‘no, no consent’ the other button will say something like ‘not now’. Just like a toddler that you tell ‘no, no cookie’, it will interpret the no as ‘not now’, making it ok to ask again. 30 seconds later.

And this will continue until one day you either out of sheer frustration or by accident click ‘Yes, give me that benefit’. And instantly all memory of the consent obtained will be forgotten. There is no way back. That ‘click’ of your mouse (or tap of the finger) was the ratchet advancing one little step. Good luck finding the permission you have just about irrevocably changed in a mountain of convolution designed to lead you astray in your quest to undo your action.

Of course it would be trivial to have a log of recently given permissions and an ‘undo’ option for each of those. But there is no money in that and so you won’t find it. The ratchet has clicked and that’s all that matters, you ‘gave your consent’, time to move on. And so the company gets to claim that not only did you give your consent, you gave it willingly, and obviously they would never ever have used your data without that consent.

The cumulative effect of all those little ratchets on your privacy is rather terrible, but there is no denying it: you were along for the ride and you had the option to ‘opt out’ at every step. Not just from that one dialogue: from all of them, by refusing to use products fielded by companies that engage in these unethical practices. Say no to ‘The Ratchet’ and its sick family of dark patterns designed to little-by-little chip away at your privacy; kick products and companies like that to the kerb where they belong. That’s the only way to really opt out.

E-Stop and Fuel, software that keeps you awake at night

Computer code that I’m writing usually doesn’t keep me up at night. After all, it’s only bits & bytes and if something doesn’t work properly you can always fix it, that’s the beauty of software.

But it isn’t always like that. Plenty of computer software has the capability of causing material damage, bodily harm or even death. Which is why the software industry has always been very quick to disavow any kind of warranty for the product it creates and - strangely enough - society seems to accept this. It’s all the more worrisome because as software is ‘eating the world’ more and more software is inserting itself into all kinds of processes where safety of life, limb and property is on the line.

Because most of the software I write is of a pretty mundane type and has only one customer (me), this is of no great concern to me; worst case I have to re-run some program after fixing it, or maybe I’ll end up with a sub-optimal result that’s good enough for me. But there are two pieces of software that I wrote that kept me awake at night, worrying about what could go wrong and whether there was any way in which I could anticipate the errors before they manifested themselves.


The first piece of software that had the remarkable property of being able to interfere with my sleep schedule was, when you looked at it from the outside, trivially simple: I had built a complex CAD/CAM system for metalworking, used to drive lathes and mills retrofitted with stepper motors or servos, to help transition metal working shops from all manual to CNC. I have written about this project before in ‘the start-up from hell’, which, if you haven’t read it yet, will give you some more background. The software was pretty revolutionary, super easy to use and in general received quite well. The electronics that it interfaced to were for the most part fairly simple; the most annoying bit in the whole affair was that we were using a glorified game computer (an Atari ST) to drive the whole thing.

The ST, for its time, was a remarkable little piece of technology. It came out of the box with a 32 bit processor (Motorola 68K), a relatively large amount of RAM (up to 1M, which was unheard of at that price point), and all kinds of ports coming out of the back and the side of the machine. Besides the usual suspects, Centronics and serial ports, the ST also came with two MIDI ports, two joystick ports, a hard drive connector and some other places to stick external peripherals into.

As the project progressed all these ports were occupied one by one until there was no socket left unused to feed information to the CPU or to take control signals back out. In the first design iteration we only had two stepper motors to drive, which was trivially done with the Centronics port. A toolchanger then occupied some more bits on the Centronics port, creating the need for some off-board latches. These were then immediately used to drive a bunch of relays as well, to give us the ability to switch coolant pumps, warning lights and other stuff on and off. Analogue IO went to one of the MIDI ports and an encoder to determine the position of the spindle went to the remaining I/O port. Before long everything was occupied. And then, with the software stable and all hardware I/O already in use, we determined the need to have a software component to the E-Stop circuitry.

This is not some kind of overzealous paranoia: some of the servos we’d drive from these boards were as large as buckets and would be more than happy to crush you or rip your arm off; one of the machines we built drove a lathe with a 5 meter (16’) chuck to cut harbour crane wheels. Fuck-ups are not at all appreciated with gear like that.

So, this was bad. There literally wasn’t a single IO port left that we could have used for this in a reliable way without taking out a piece of functionality that was already in use at various customers. And yet, the E-stop circuitry was a hard requirement: for one, the local equivalent of OSHA would not sign off on the machine without it (and rightly so); for another, we otherwise relied overly on our end-users keeping their wits about them while using the machine, which is a really bad idea when it comes to dealing with the aftermath of what quite possibly involved shutting down a chunk of dangerous machinery to avoid an accident. After all, the ‘E’ in E-stop stands for ‘Emergency’, and once that switch gets pushed you should not assume anything at all about the state of the machine.

So, this was a bit of a brain teaser: how to reliably restart the machine after an E-stop condition is detected. The E-stop mechanism itself was super simple: mushroom switches wired in series were placed at strategic points on the machine and the equipment case; pressing any switch would latch that switch in the ‘off’ position (E-stop engaged) and to release the switch you had to rotate it. Breaking the circuit caused a relay to drop out, which cut the power to all hardware driving motors, pumps and so on. That way you could very quickly disable the machine but you had to make a very conscious decision to release the E-stop condition.

The hard part was that once that situation had come and gone, if you did not have each and every output port in a defined state, releasing the E-stop switch would most likely lead to an instantaneous replay of the previous emergency or potentially an even bigger problem! Worse yet, if the power had been cut to the computer itself the boot process was not guaranteed to put sane values onto all output ports, causing the machine to malfunction the instant you tried to power it up. Staying in sync between the state of the hardware and the software was a must. Once the CAM software was up and running and had gone through its port initialization routine you could enable the relay again, but there was no output port available to do this, and even if there was, using a single line to drive the relay would likely cause it to be activated briefly during a reboot, something you really do not want on a relay wired to ‘hold’ itself in the on position.

After many sleepless nights I hit on the following solution: analyzing the output of the 8 bit Centronics port after many power-up cycles, it was pretty clear that even though the outputs were terribly noisy they were also quite regular. This held across all the STs that I could get my hands on (quite a few of them; we had a whole bunch of systems ready to be shipped). Knowing that suggested that instead of using single lines, a pattern could be used that would not occur at all while the machine was ‘off’, and that was impossible to generate when the machine was ‘on’ (to avoid accidentally triggering the sequence during an output phase). A pattern clocked out via the normal Centronics sequence (load byte, trigger the ‘strobe’ line, wait for a bit, next byte) and a decoder plus a bunch of hardwired diodes feeding 8 bit input comparators took care of the remainder. That way the relay would never switch into the ‘on’ state by accident during a reboot or cold boot, no single wire output state could drive the relay, and no regular operation could accidentally trigger the relay. All conditions satisfied.
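In software terms the decoder side of this scheme amounts to a small state machine. The sketch below is purely illustrative: the byte values are invented placeholders, not the actual pattern, which was chosen so that neither power-up noise nor normal machine output could ever produce it.

```python
# Sketch of the decoder side of the unlock scheme. The byte sequence
# here is an invented placeholder, not the pattern used on the real
# machines. The relay is only enabled after the exact sequence arrives
# in order; any wrong byte resets progress, so the noisy but regular
# power-up output of the port can never complete it by accident.
UNLOCK_SEQUENCE = [0xA5, 0x5A, 0xC3, 0x3C]

class UnlockDecoder:
    def __init__(self):
        self.position = 0
        self.relay_enabled = False

    def strobe(self, byte):
        """Called once per Centronics strobe with the 8 data bits."""
        if self.relay_enabled:
            return  # latched on until power is cut
        if byte == UNLOCK_SEQUENCE[self.position]:
            self.position += 1
            if self.position == len(UNLOCK_SEQUENCE):
                self.relay_enabled = True
        else:
            self.position = 0  # any mismatch starts over

decoder = UnlockDecoder()
for noise in [0xFF, 0x00, 0xA5, 0x12]:   # boot-time garbage on the port
    decoder.strobe(noise)
armed_after_noise = decoder.relay_enabled  # still False

for byte in UNLOCK_SEQUENCE:             # deliberate unlock by the CAM software
    decoder.strobe(byte)
armed_after_unlock = decoder.relay_enabled  # now True
```

The real decoder was hardwired diodes and comparators rather than code, but the logic is the same: only a deliberate, complete sequence arms the relay.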

Some of those machines sold in the 80’s are still alive today and are still running production, quite a few of them were sold and the successor lines are still being sold today (though with completely redesigned hardware and software).


The second piece of software that kept me awake at night was a fuel estimation program for a small cargo airline operating out of Schiphol Airport in the Netherlands.

Writing software for aviation is a completely different affair than writing software for almost every other purpose. The degree of attention to correct operation, documentation, testing, resilience against operator error and so on is eye-opening if you have never done it before. I landed this job through a friend of mine who thought it was right up my alley: a really ancient system running a compiled BASIC binary on a PC needed to be re-written because the source code could not be produced, and even if it could be produced the hardware was being phased out and the development environment the software was based on no longer existed (the supplier of that particular dialect of BASIC had gone out of business). The software was getting further and further behind, the database of airports that it relied on was seriously outdated (which is a safety issue) and so on.

So this was to be a feature-for-feature and bit-identical-output clone of the existing system, but with auditable source code, up-to-date data and on a more modern platform. The first hurdle was the compiled binary. A friend of mine (who at the time worked at DigiCash) and the person that landed me the job brought some substantial help here: they decompiled the binary in record time, giving us a listing of the BASIC source code. This was a great help because it at least gave good insight into what made the original program tick, but it also showed how big the job really was: 10K+ lines of spaghetti BASIC interspersed with thousands of lines of ‘DATA’ statements, without a single line of documentation to go with it to indicate what was what. The only thing that helped was that we could figure out where the various input screens were and what fields drove which parts of the computation.

To make it perfectly clear: if this software malfunctioned, a 747 with an unknown load of cargo and between 5 and 7 crew members would take an unscheduled dip into some ocean somewhere, so failure was really not an option, and given the state of the available data it seemed that failure was very much a likely outcome.

Weeks went by, painstakingly documenting each and every input into the program: which variables held what values (after the decompilation process all variable names were two letters long, without any relation to their meaning), the allowed ranges (sometimes in combination with other fields), what calculations were done on them and which part of the output resulted. Complicating matters was that the BASIC runtime contained its own slightly funny floating point library, which could make it very hard to get identical output from two pieces of code that by rights should have produced exactly the same numbers. For each and every such deviation there had to be a chunk of documentation explaining exactly what caused the deviation, what the range of such deviations would be and how this could affect the final estimate.
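The discipline of checking every recomputed value against the original binary’s output can be sketched as follows. This is a hedged illustration: the tolerance and the sample figures are made up, not values from the actual project.

```python
import math

def check_against_reference(new_value, reference_value, rel_tol=1e-9):
    """Compare a recomputed value against the output of the old binary.

    The original BASIC runtime used its own floating point routines, so
    tiny deviations are expected; each must stay within a documented
    bound and be recorded together with an explanation of its cause.
    Returns the relative deviation so it can be logged.
    """
    deviation = abs(new_value - reference_value) / max(abs(reference_value), 1e-30)
    if not math.isclose(new_value, reference_value, rel_tol=rel_tol):
        raise ValueError(f"deviation {deviation:.3e} exceeds bound {rel_tol:.1e}")
    return deviation

# e.g. a (made-up) fuel figure from the re-implementation vs. the old binary
dev = check_against_reference(104732.5100000002, 104732.51)
```

Any deviation that exceeded the bound meant tracking down which of the two floating point implementations was responsible before the new number could be trusted.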

I learned a lot about take-off-weights, cruising altitude, trade winds, alternate airports, the different types of engines 747’s can be fitted with and how all these factors (and 100’s more) affected the fuel consumption. I have never before or after written a piece of software with so many inputs to produce only one number as the output: how much fuel to take on to fulfill all operational, legal and safety requirements, and to be able to prove that this is the case. In the end it all worked out, the software went live next to the old software for a while, consistently did a better job and eventually someone pulled the plug on the old system. No cargo 747’s flown by that company ever ended up in unscheduled mid-ocean or short-of-the-runway stops due to lack of fuel (or any other cause).

What I also learned from this job, besides all the interesting details about airplanes and flying, is that you can’t take anything for granted when it comes to computing critical values. You need to know all the details of the underlying runtime environment, floating point computations, the hardware it runs on and so on. Then and only then can you sleep soundly, knowing that you have ruled out all the elements that could throw a wrench in your careful computations.

In closing

I liked both of these jobs quite a bit, and even though in the first the environment was (literally) murderous, I learned a tremendous amount and I knew coming out of those jobs that I had ‘levelled up’ as a programmer. Even so, working on software when there are lives on the line is something that really opened my eyes to how incredibly irresponsible the software industry in general is when it comes to its work product. The ease with which we as an industry disavow responsibility, how casually we throw beta grade software out there that interacts directly with, for instance, vehicles, and how little of such software is auditable has me worried.

This time it is not my software that keeps me up at night, but yours. So if you are working on such software, take your time to get it right, make sure that you have thought through all the failure modes of what you put out there and pretend that your spouse, children or other people whose lives you care about depends on it, one day it just might.

And if my words don’t convince you then please read up on the Therac-25 debacle, maybe that will do the job.

The Web in 2050

If you’re reading this page it means that you are accessing a ‘darknet’ web page. Darknets used to refer to places where illicit drugs and pornography were traded; these days the term refers to lonely servers without any inbound links, languishing away in dusty server rooms that people have all but forgotten about. Refusing to submit to either one of the two remaining overlords, these servers sit trafficless and mostly idle (load average: 0.00) except for when the daily automated back-up time rolls around, waiting for a renaissance heralded by the arrival of a packet on port 80 or 443, of the WWW as it was once known: a place where websites freely linked to each other. Following a link felt a bit like biting into a chocolate bon-bon, you never quite knew whether you were going to like it or be disgusted by it but it would never cease to surprise you.

In 1990, when the web was first started up, there was exactly one website, http://info.cern.ch . There was nothing to link to, and nobody linking in, the page was extremely simple without fancy graphics or eye candy, just pure information. Much like this lonely server here today. In January 1991, so pretty soon after, this was followed by the first webservers outside of CERN being switched on, making it possible for the first time to traverse the ‘web’ from one institution to another.

It was about as glamorous as you’d expect any non-event to be, but the consequences would be enormous. To reference another darknet site, the long since defunct W3 standards body (whose website is miraculously still available): in March of 1993 web traffic accounted for 0.1% of all internet traffic; by September it was 1%, with a grand total of 623 websites available at the end of the year. Anybody with a feeling for numbers knows what came next: by the end of 1994 there were 10,000 websites and a year later the number was 25,000. Skip a few years to 2014, when there were a bit under a billion websites live.

So, here we are in March 2050 and as of yesterday, when Amazon gave up the fight for the open web and decided to join Google there are only two websites left. What went wrong?

Two companies deserve extra attention when it comes to murdering the web: Google and Facebook, as you all know the last two giants standing after an epic battle, lasting decades, for control of the most important resource all of us consume daily: information. Whether you’re a Googler or a Facebookie, we can all agree - even if our corporate sponsor might not - that it seems as if there is less choice these days. If you’re under 30 you won’t remember a time before Google or Facebook were dominant. But you will remember some of the giants of old: Microsoft, The New York Times, The Washington Post, CNN and so on, the list is endless. If you’re over 50 you might just remember the birth of Google, with their famous motto ‘Don’t be Evil’. But as we all know the road to hell is paved with good intentions, and not much later (in 2004) Facebook came along with their promise to ‘connect the world’. Never mind that it was already well connected by that time, but it sure sounded good.

Goofy kid billionaires and benevolent corporations, we were in very good hands.

But somewhere between 2010 and 2020 the tone started to change. The two giants collided with each other more and more frequently and forcefully for control of the web, and in a way the endgame could already be seen as early as 2017. Instead of merely pointing to information Facebook and Google (and many others, but they are now corpses on the battlefield) sought to make their giant audiences available with them as the front door only. Many tricks, some clean and some dirty were deployed in order to force users to consume their content via one of the portals with the original websites being reduced to the role of mere information providers.

This was quite interesting in and of itself because we’d already been there before. In the 80s and 90s there was a system called Viditel in Europe (called ‘Minitel’ in France) which worked quite well. The main architecture was based around telecommunications providers (much like Google and Facebook are today, after their acquisition war of the ‘roaring 20s’ left them in possession of all of the world’s telcos, having run the ones that wouldn’t budge into the ground through ‘free’ competition subsidized from other revenue streams). These telcos would enter into contracts with information providers, which in turn would give the telcos a cut of the fee on every page. In a way today’s situation is exactly identical, with one twist: the information providers provide the information for free in the hope of getting a cut of the advertising money Google or Facebook make from repackaging and in some cases reselling the content. The funny thing - to me, but it is bittersweet - is that when we finally had an open standard and everybody could be a publisher on the WWW we were so happy. It was as if we had managed to break free from a stranglehold: no longer wondering whether or not the telco would shut down our pages for writing something disagreeable, no more disputes about income due without risking being terminated (and after being terminated by both Google and Facebook, where will you go today, a darknet site?).

Alas, it all appears to have been for nought. In the ‘quiet 30’s’ the real consolidation happened; trench wars were fought, with users being given a hard choice: join Google and your Facebook presence will be terminated, and vice versa. The gloves were definitely off, it was ‘us’ versus ‘them’. Political parties clued in to this and made their bets, roughly half ended up in Google’s bin and the other half with Facebook. Zuckerberg running for president as a Republican candidate in the United States more or less forced the Democratic party to align with Google, setting the stage for a further division of the web. Some proud independents tried to maintain their own presence on the web but soon faded into irrelevance. Families split up over the ‘Facebook or Google’ question; independent newspapers (at first joining the AMP bandwagon, not realising this was the trojan horse that led to their eventual demise) ceased to exist. The Washington Post ending up with Team Google probably was predictable and may have been a big factor in Facebook going after Amazon with a passion. One by one what used to be independent webservers converted into information providers for fewer and fewer silos or risked becoming completely irrelevant. Regulators were powerless to do anything about it because each and every change was incremental, ostensibly for the greater good and after all: totally made out of free will.

The ‘silent 40’s’ were different. No longer was there any doubt about how this would all end; if the battle for the WWW had been in full swing a decade earlier, this was the mopping up stage, the fight for scraps with one last big prize left. An aging Richard Stallman throwing in the towel and switching off stallman.org rather than declaring allegiance to either giant made for a really sad day. Amazon fought to the bitter end, trying to stay in the undivided middle ground between Google and Facebook. But dwindling turnover and Facebook’s launch of a direct competitor (‘PrimeFace’) forced their hand yesterday. And now, with Google sending free Googler t-shirts to all of the former members of team-Amazon, it comes to a close.

Well, maybe. There is still this website, and info.cern.ch is also still up. So maybe we can reboot the web in some form after all; two sites can make a network, even if they don’t have users. Or do they? Is anybody still reading this?

Sorting 2 Tons of Lego, Many Questions, Results

For part 1, see here. For part 2, see here.


The machine is now capable of running unattended for hours on end, which is a huge milestone. No more jamming or other nastiness that causes me to interrupt a run. Many little things contributed to this: I’ve gone over all the mechanics, figured out all the little things that went wrong one by one and come up with solutions for them. Some of those were pretty weird, for instance very small Lego pieces went off the transport belt with such high velocities that they could end up pretty much everywhere. The solution for this was to moderate the duration of the puff relative to the size of the component, and to make a skirt along the side of the transport belt to make sure that pieces don’t land on the return side of the belt, which would cause them to get caught under the roller.
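Moderating the puff comes down to scaling the valve-open time with the part size. The sketch below is only an illustration of that idea; the timing constants are invented placeholders, and the real values would be tuned on the machine itself.

```python
import time

def fire_valve(open_valve, close_valve, part_size_mm, base_ms=8.0, ms_per_mm=0.6):
    """Open a solenoid valve for a duration scaled to the part size, so
    that small parts get a gentle puff instead of being launched off
    the belt. open_valve and close_valve are callables that would drive
    the actual hardware; the constants are illustrative placeholders."""
    duration_ms = base_ms + ms_per_mm * part_size_mm
    open_valve()
    time.sleep(duration_ms / 1000.0)
    close_valve()
    return duration_ms

# dry run with stand-in callables instead of real valve drivers
events = []
duration = fire_valve(lambda: events.append('open'),
                      lambda: events.append('close'),
                      part_size_mm=10)
```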


The machine is now twice as fast, due to a pretty simple change: I’ve doubled the number of drop-off bins from 6 to 12, which reduces the number of passes through the machine. It now takes just 3 passes to get the Lego sorted into the categories below. This required extending the pneumatics with another 6 valves, another manifold and a bunch of wiring and tubing; the expanded controller now looks like this:
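The arithmetic behind the speedup: each pass splits the contents of a bin into as many groups as there are drop-off bins, so k passes can distinguish up to bins**k categories. A quick sketch, where the category count is an assumed placeholder rather than the machine’s actual number:

```python
import math

def passes_needed(num_categories, bins_per_pass):
    """Each pass through the machine splits every bin's contents into
    bins_per_pass groups, so k passes separate up to bins_per_pass**k
    categories: k = ceil(log(num_categories) / log(bins_per_pass))."""
    return math.ceil(math.log(num_categories) / math.log(bins_per_pass))

# assuming ~1000 distinct sort categories, purely for illustration
with_6_bins = passes_needed(1000, 6)    # 4 passes
with_12_bins = passes_needed(1000, 12)  # 3 passes
```

Because the pass count grows only logarithmically in the number of categories, doubling the bins buys a whole pass rather than a marginal improvement.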

I’ve also ground off the old legs from the base (the treadmill) and welded on new ones to give a little bit more space to accommodate the new bins, but that’s pretty boring metal work.


The image database is now approximately 60K images of parts, this has had a positive effect on the accuracy of the recognition, fairly common parts now have very high recognition accuracy, less common parts reasonably high (> 95%), rare parts are still very poor but as more parts pass through the machine the training set gets larger and in time accuracy should improve further. Judging by the increase in accuracy from 16K images to 60K images there is something of a diminishing rate of return here and it will likely be that by the time we reach 98-99% accuracy there will be well over a million images in the training set.
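To make the diminishing returns concrete, here is a toy extrapolation under a power-law error model (error ≈ c·N^(-a)). The error rates at 16K and 60K images below are invented for illustration only, but under assumptions like these the model does indeed predict well over a million images for ~98% accuracy:

```python
import math

def fit_power_law(n1, err1, n2, err2):
    """Fit err = c * n**(-a) through two (dataset size, error) points."""
    a = math.log(err1 / err2) / math.log(n2 / n1)
    c = err1 * n1 ** a
    return a, c

def images_for_error(target_err, a, c):
    """Invert the fitted model to find the dataset size for a target error."""
    return (c / target_err) ** (1 / a)

# error rates at 16K and 60K images are assumptions, not measurements
a, c = fit_power_law(16_000, 0.20, 60_000, 0.10)
needed = images_for_error(0.02, a, c)  # images for ~98% accuracy
```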


I’ve reduced the image size a bit in the horizontal dimension, from 640 pixels wide to 320. Now that I have more data to work with this seems to give better results; the difference isn’t huge but it is definitely reproducible. The RESNET50 network still seems to give the best compromise between accuracy and training speed. I’ve added some utilities to make it easier to merge new samples into the existing dataset, to detect (and correct) classification errors and to drive the valve solenoids in a more precise way to make sure that valves reliably open and close for precisely determined amounts of time. This also helps a lot in making sure that parts don’t shoot all over the room.
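The width reduction can be illustrated with a simple box filter in NumPy. The real pipeline may well use a proper image resize; this only shows the idea of averaging adjacent columns to halve the width:

```python
import numpy as np

def halve_width(img):
    """Downsample an image to half its width by averaging adjacent
    column pairs - a crude box filter, standing in for a real resize."""
    h, w, c = img.shape
    assert w % 2 == 0, "width must be even"
    return img.reshape(h, w // 2, 2, c).mean(axis=2)

frame = np.random.rand(480, 640, 3)  # a 640 pixel wide camera frame
small = halve_width(frame)           # now 320 pixels wide
```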


Overall the machine works well but I’m still not happy with the hopper and the first stage belt. It is running much slower than the speed at which the parts recognizer works (30 parts / second) and I really would like to come up with something better. I’ve looked at vibrating bowl feeders and even though they are interesting they are noisy and tend to be set up for one kind of part only. They’re also too slow. If anybody has a bright idea on how to reliably feed the parts then please let me know, it’s a hard problem for which I’m sure there is some kind of elegant and simple solution. I just haven’t found one yet :)


The project has had tremendous coverage from all kinds of interesting publications: IEEE Spectrum had an article, and lots of internet based publications listed or linked it (for instance: Mental Floss, Engadget, Mashable). If you have published or know about another article about the sorter please let me know and I’ll add it to the list.

Of course I have this totally backwards

Starting by buying Lego, sorting it and then thinking about how to best sell the end result is the exact opposite of how you should approach any kind of project, but truth be told I’m in this far more for the technical challenge than for the commercial part. That leaves me with a bit of a problem: The sorter is working so well now that I am actually sitting on piles of sorted Lego that would probably make someone happy. But I have absolutely no idea if the sort classes that I’m currently using are of interest.

So if you’re a fanatical Lego builder I’d very much like to hear from you how you would like to buy Lego parts in somewhat larger quantities, say from 500 grams (roughly one pound) and up. Another thing I would like to know is where you’re located so I can figure out shipping and handling.

As you can see, right now the sorting is mostly by functional groups: slopes with slopes, bricks with bricks and so on. But there are many more possibilities to sort Lego, for instance by color. Please let me know in what quantity and what kind of groupings would be the most useful to you as a Lego builder.

My twitter is at twitter.com/jmattheij and my email address is jacques@mattheij.com, here are some pictures of what the current product classes look like, but these can fairly easily be changed if there is demand for other mixes.



Space and Aircraft:


Wedge plates:

Vehicle parts:


1 wide bricks:

1 wide plates, modified:

1 wide plates:

Hinges and couplers:

Minifigs and minifig accessories:

2 wide plates:





Plates 6 wide:

Plates 4 wide:


Bricks 2 wide:

Doors and windows:

Construction equipment:



Bricks 1 wide, modified:

Macaroni pieces:

Corner pieces:





Helicopter blades:

Stepped pieces:

I Blame The Babel Fish

One of my favorite writers of all time, Douglas Adams, has a neat little plot device in that wholly remarkable book ‘The Hitch Hiker’s Guide to the Galaxy’, called the Babel Fish.

Let me quote the master himself to explain the concept of the Babel Fish to you if you’re not already aware of it:

“The Babel fish is small, yellow, leech-like, and probably the oddest thing in the Universe. It feeds on brainwave energy received not from its own carrier, but from those around it. It absorbs all unconscious mental frequencies from this brainwave energy to nourish itself with. It then excretes into the mind of its carrier a telepathic matrix formed by combining the conscious thought frequencies with nerve signals picked up from the speech centres of the brain which has supplied them. The practical upshot of all this is that if you stick a Babel fish in your ear you can instantly understand anything said to you in any form of language. The speech patterns you actually hear decode the brainwave matrix which has been fed into your mind by your Babel fish.”

“Now it is such a bizarrely improbable coincidence that something so mind-bogglingly useful could have evolved purely by chance that some thinkers have chosen to see it as a final and clinching proof of the non-existence of God.”

“The argument goes something like this: ‘I refuse to prove that I exist,’ says God, ‘for proof denies faith, and without faith, I am nothing.’ ‘But,’ says Man, ‘the Babel fish is a dead giveaway, isn’t it? It could not have evolved by chance. It proves you exist, and, by your own arguments, you don’t. QED.’ ‘Oh dear,’ says God, ‘I hadn’t thought of that,’ and vanishes in a puff of logic. ‘Oh, that was easy,’ says Man, and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing.”

“Most leading theologians claim that this argument is a load of dingo’s kidneys, but that didn’t stop Oolon Colluphid making a small fortune when he used it as the theme of his best-selling book, Well That About Wraps It Up For God.”

“Meanwhile, the poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation.”

So, now that you have the general idea of what the Babel Fish was all about, I want you to keep an eye on that last part of the entry in the guide, especially the ‘more and bloodier wars’ bit combined with the ‘removing barriers to communication’.

I’ve seen a question posed in more than one place, and that sort of pattern tends to trigger my curiosity. The question has two components: why is the world moving towards a more authoritarian kind of rule all of a sudden, and why is this happening now?

Me, I blame the Babel Fish. Let me explain. Since 1995 we’ve been working very hard at removing those barriers to communication. There used to be a degree of moderation and a lower bound to the cost of communication, especially across longer distances and to larger numbers of people. It’s one thing to have a thought in your head, quite another to communicate that thought at the long-distance or international rates of 1990 or so no matter how important you think it is and even worse if you want to tell more than one person. But that has changed - dramatically.

The cost of almost all forms of communication - written, voice, video - worldwide, to an unbelievably large audience, is now essentially zero. The language barrier is still there, but automatic translation is getting better and better and it won’t be long before we really can communicate with everybody, instantaneously. That kind of power - because it is a power, I don’t doubt that one bit - comes with great responsibility.

If what you say or write is heard only by people already in your environment, who know you and who can apply some contextual filters, then the damage that you can do is somewhat limited.

But if you start handing out megaphones that can reach untold millions of people in a heartbeat, and combine that with the unfiltered, raw output and responses of another couple of million people, then something qualitatively changes. The cost drop from $0.50 / minute long distance, a photocopy of your manifesto, or airtime on a radio station to $0 is far more than a quantitative change. It means that unfiltered ramblings and polarized messages from people you’d normally have no contact with have immediate access to your brain, and in a quantity that even the most balanced person would find hard to resist. It’s an incessant barrage of updates from all over the globe (this blog is one such input, and you’re reading it, right?). So suddenly the word of some agitator or angry person carries roughly the same weight as a well-researched article in a respected newspaper. Our brains do not have a ‘quality of source’ meta-data setting, they either remember the data or they don’t, and before you know it one grade of bullshit starts to reinforce another and then your brain is polluted with garbage.

You might feel that you are able to process all this information with care, but I highly doubt that is effective in the long run. Just as there is no such thing as ‘bad advertising’ - as long as a brand is seen or heard about, it will take root, even if that root starts from a negative position - we are still exposed and to some extent defenseless. Do this for a decade or two and the world will change. I firmly believe that is what we are witnessing, and that Douglas Adams totally nailed it when he wrote that removing barriers to communication could become the cause of conflict.

In the present that conflict takes the form of polarization, of splitting harmonious groups of people into camps, and it doesn’t really matter what causes the split. People who are split tend to be much easier to manipulate: easier to get to do things against their own interest, easier to get to support causes that they would not support if they were capable of pausing long enough to think things through, as used to be the norm.

So, to make it specific, this reduction in cost has made it possible to do a number of things:

  • it allows the manipulation of public opinion on a vast scale

  • it allows this from all over the globe to everywhere else

  • it makes it possible for single individuals to broadcast to millions of recipients without any kind of filter

  • it allows the creation of echo chambers so vast that it seems as if the whole world is that chamber, and the chamber becomes representative of the truth

  • it levels the value of the printed word, which used to require the collusion of a large number of people, against the word of a single individual

  • it allows the people on both sides of an argument to duke it out directly

  • all this happens on a moment’s notice

If you look at the past, there are other examples of really bad cases of manipulation of public opinion, and those led to predictable and very bad consequences. Today we no longer need large amounts of capital to buy a printing press, a television satellite, or a radio transmitter; all it takes to wreak havoc worldwide and to set people against each other is an internet connection.

In closing, I know Douglas Adams wrote fiction, but he was also a very smart cookie. Removing barriers is generally good, and should be welcomed. But we should also be aware that those barriers may have had positive sides, and that as a species we are not very well positioned to deal with such immense changes in a very short time. We seem to need some time to react, time to grow a thicker skin, lest we remain overly vulnerable and allow ourselves to be goaded into making big mistakes, such as accidentally empowering authoritarian regimes, which tend to be very capable when it comes to using communications systems for propaganda purposes.

Great power comes with great responsibility; the power to communicate with anybody, instantaneously, at zero cost, is such a power.

Edit: HN User tarr11 linked this piece by DNA about the internet (some users report the link does not work but it works for me, strange).

Edit2: And HN User acabal points out that the fact that anonymity is so easy to come by is also an important factor.