Information bubbles need not be a tool of deliberate propaganda, and the way they function shows that our ideas of the past are shaped not only by school and pop culture but also by software.

Sometimes methods and tools designed to solve specific problems cause other issues. In 1895 Wilhelm Röntgen published his study of a new type of radiation, later named after him. Six years later Röntgen was a Nobel Prize winner, and X-rays and the possibility of scanning the human body were being widely discussed – and not only in medical circles: X-ray generators began to be used at paid live shows where souvenir X-ray images were taken, and the radiation was even used for depilation. It took some time before its harmful effects on body tissue were examined, and today X-rays are used very sparingly, with many safeguards. While the damage done by careless handling of radiation can be assessed by observing physical changes in the body (it is then easy to see that something is wrong), it is not so easy in the case of software.

Invisible intermediary

When the American journalist and activist Eli Pariser’s book The Filter Bubble: What the Internet Is Hiding from You was published in May 2011, Facebook already had almost 750 million users. At one of his conference talks, Pariser illustrated the filter bubble effect described in the book using Facebook itself as an example. Among his friends there were many people with conservative and right-wing views, far removed from his own political preferences. He wanted, as he put it, not to close himself off to the voices of people ‘on the other side’ but to observe posts and links that might not fit his own assessment of reality. At some point, however, that content became less and less visible, until it was difficult for him to find the opinions of his right-wing friends without a deliberate search. More interestingly, Pariser had not changed any of his account settings or removed those people from his friends list. The filters updated themselves automatically, without giving any notice of the changes. The Facebook algorithm registered that Pariser mostly clicked on links leading to content in line with his own political preferences, and began to limit the visibility of the content he was less likely to respond to. After all, Facebook, as a social network and a business model, is built on reactions, which form the core offering of the service and the commodity sold to advertisers.
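To make this mechanism more tangible, here is a minimal sketch – with entirely hypothetical field names and an invented threshold, not Facebook’s actual system – of how a feed could quietly drop posts from friends whose links a user rarely clicks:

```python
from collections import defaultdict

# Toy model of engagement-based feed filtering. All field names and the
# threshold are invented for illustration; the real ranking system is far
# more complex and not public.

def filter_feed(posts, click_history, min_click_rate=0.05):
    """Hide posts from friends whose links the user almost never clicks."""
    clicks = defaultdict(int)
    shown = defaultdict(int)
    for event in click_history:          # e.g. {"friend": "alice", "clicked": True}
        shown[event["friend"]] += 1
        clicks[event["friend"]] += int(event["clicked"])

    visible = []
    for post in posts:                   # e.g. {"friend": "bob", "text": "..."}
        friend = post["friend"]
        rate = clicks[friend] / shown[friend] if shown[friend] else 1.0
        if rate >= min_click_rate:       # friends we rarely react to quietly disappear
            visible.append(post)
    return visible
```

No setting is changed and no notice is given; the feed simply reshapes itself around past reactions.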

An article by Mat Honan titled ‘I Liked Everything I Saw on Facebook for Two Days. Here’s What It Did to Me’, in which the author describes an experiment he subjected himself to.

A few years later, a journalist from Wired magazine decided to carry out a test: for two days he ‘liked’ all the content that Facebook showed him, even if it was repulsive or radically opposed to his views. As a result, his friends’ posts were no longer visible, and the feed was dominated by texts from news portals and content published on the accounts of well-known brands. The feed became unreadable: there was nothing of interest to him anymore.

These two examples perfectly illustrate the problem with the filter bubble, an effect caused by filtering algorithms.

On the one hand, filters that work unnoticed limit the content we consume to that which is consistent with our narrowly understood preferences. On the other hand, the lack of such filters would make many services impossible to use.

The key point is that the algorithms themselves do not value information the way we do. The bubble effect Pariser talked about was not intentionally designed into Facebook; it arose incidentally. This does not mean, of course, that no design assumptions contributed to the situation. ‘A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa,’ the Facebook founder Mark Zuckerberg is reported to have said. Facebook’s filters prioritize information close to your preferences, as expressed by thousands of clicks, comments and other activities, including on pages outside the service. Google works in a similar way – the algorithm that ranks search results takes into account more than 50 different factors, including the user’s location, the computer they are using, previous searches and purchases from online shops. This does not have to be the result of a sinister drive to control and censor information; it is rather the consequence of an assumption: effectiveness is more important than openness.
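The logic of such ranking can be sketched in a few lines – assuming, purely for illustration, a handful of invented signals and weights rather than the dozens of factors the real systems combine:

```python
# Illustrative relevance scoring: many weighted signals collapsed into a
# single number, optimized for engagement rather than breadth. The signal
# names and weights below are invented for the example.

WEIGHTS = {
    "topic_affinity": 0.5,    # how often the user engaged with this topic before
    "source_affinity": 0.3,   # how often the user engaged with this source before
    "recency": 0.1,           # newer items score a little higher
    "location_match": 0.1,    # the item relates to the user's region
}

def relevance(item, profile):
    """Weighted sum of per-signal scores, each normalized to the range 0-1."""
    signals = {
        "topic_affinity": profile["topic_clicks"].get(item["topic"], 0.0),
        "source_affinity": profile["source_clicks"].get(item["source"], 0.0),
        "recency": item["recency"],
        "location_match": 1.0 if item["region"] == profile["region"] else 0.0,
    }
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

def rank(items, profile):
    # Items close to past behaviour float to the top; everything else sinks,
    # however informative it might be.
    return sorted(items, key=lambda item: relevance(item, profile), reverse=True)
```

Nothing in such a scoring rule asks whether an item is true, important or diverse – only whether the user is likely to react to it.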

There is too much information to be able to read everything and decide for oneself what is valuable.

It is worth adding that in 2015 Pariser’s theses were weakened by a large study of Facebook users covering more than 10 million people (US residents). A report published in the journal Science indicated that it was not only the filtering algorithms but, above all, the homogeneity of one’s group of friends on Facebook that most affected the visibility of content that might not agree with our preferences (cross-cutting content).

The cover of the first edition of The Filter Bubble: What the Internet Is Hiding from You by Eli Pariser (Penguin Press, New York, 2011).

The study also produced interesting estimates – around 20 per cent of the friends of the average Facebook user hold opposing political views, though the authors analysed this trend only along the liberal–conservative divide characteristic of US politics. The filter bubble effect also turned out to be much smaller than Pariser argued – the algorithms concealed between 6 and 8 per cent of content that did not conform to the user’s political preferences.

Historical information bubble

Profiling does not have to be bad – music- and film-streaming services sell not only access to a large database of works but also recommendation mechanisms that allow us to make better use of those collections. The problem arises when filtering cuts us off from necessary information and hides the way it formats our knowledge. During election campaigns, filtering can influence voters’ decisions by denying them access to unknown and alternative candidates and viewpoints, and it may reinforce false information. It can also strengthen anti-scientific attitudes, for example by favouring content that plays down the dangers of an epidemic while blocking information from medical and state sources. A similar process can also take place with regard to history – to how we imagine the past and how we understand it.

The filter bubble not only turns our preferences into our reality but also limits our future choices.

It is very easy to give examples of such mechanisms. A user visits YouTube and enters the keywords ‘Second World War’ in the search box. Because he had been looking for something about German tanks on Google a few weeks earlier, the algorithm offers him films on armoured vehicles and on strategy, tactics and warfare. YouTube hosts many films documenting everyday life in occupied countries, war crimes and the political history of the period, but the proposed materials will consist primarily of those concerning militaria, without any warning when the content is merely a cover for political propaganda or undermines historical facts. It is estimated that the recommendation algorithm is responsible for 70 per cent of the time users spend on YouTube. Not without reason has it been accused of prioritizing videos offering sensational treatments of a subject and conspiracy theories, since its main aim is to keep the user on the site; the suggested videos must be as attractive and engaging as possible, in the worst sense of the word. Tristan Harris, who once worked at Google as a design ethicist, divides YouTube’s offer into two parts. The first (the calm section) includes entertainment videos, TV material, educational recordings and content for children. The other, which includes conspiracy content and hate speech, he describes as ‘crazytown’ – and the recommendation algorithm favours that part on the basis of the user’s previous choices. Sometimes we are only interested in entertainment and curiosities, but at other times we want source material to develop our knowledge of a subject; the YouTube recommendation system is not able to take this difference into account.
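A toy version of such a recommender shows how quickly the history feed narrows. The candidate list, tags and scoring rule below are invented for the example; the actual system is a large, non-public machine-learning model:

```python
from collections import Counter

# A toy 'watch next' ranking: candidates are scored purely by how well their
# tags match what the user has already watched, weighted by watch time.

def tag_profile(watch_history):
    """Build a weighted tag profile from past views."""
    profile = Counter()
    for view in watch_history:               # e.g. {"tags": ["tanks", "ww2"], "minutes": 25}
        for tag in view["tags"]:
            profile[tag] += view["minutes"]
    return profile

def recommend(candidates, watch_history, k=5):
    profile = tag_profile(watch_history)

    def score(video):
        # A documentary on everyday life under occupation scores low for a user
        # whose history is dominated by tank footage, even if it is more informative.
        return sum(profile[tag] for tag in video["tags"])

    return sorted(candidates, key=score, reverse=True)[:k]
```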

It is difficult to accuse filtering algorithms of deliberate manipulation, but this does not mean that they cannot be used to deliberately mislead and disseminate historical propaganda, which can also be a tool for influencing electoral decisions.

In the United States, the data of more than 80 million Facebook users was used in 2016 to create personal and political profiles and to better target political messaging – the information, collected from 2013 onwards, was made available to the campaign staffs of Donald Trump and Ted Cruz. The profiling algorithm used survey data provided voluntarily by users within a Facebook app, as well as data collected by Facebook about clicks, comments, the user’s geographical location and pages viewed, among other things. The resulting model was able to recognize a user’s political preferences with 85 per cent accuracy, even when they had not been declared explicitly. The Cambridge Analytica affair showed how significant even the most trivial social media activities can be. Liking specific pages (including those devoted to historical topics) can be correlated by profiling algorithms with specific political views – users profiled in this way become potential targets of tailored advertising and manipulative actions.
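The underlying technique can be illustrated with a small, entirely synthetic example – a binary matrix of page likes and a plain logistic regression, standing in for the far larger proprietary models described above; the page names, data and output here are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of like-based profiling: each user is a binary vector of page likes,
# and a simple classifier learns which likes correlate with a declared
# political preference. All data is synthetic.

PAGES = ["history_battles", "folk_cooking", "armoured_vehicles", "poetry_daily"]

# Rows: users; columns: 1 if the user liked the page, 0 otherwise.
likes = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
])
declared_preference = np.array([1, 1, 0, 0, 1, 0])   # labels from users who declared a preference

model = LogisticRegression().fit(likes, declared_preference)

# Predict the preference of a user who never declared one explicitly.
new_user = np.array([[1, 0, 1, 0]])                  # liked two military-history pages
print(model.predict(new_user), model.predict_proba(new_user))
```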

Eli Pariser – activist, writer and coiner of the term ‘filter bubble’ – against the backdrop of a chart illustrating the concept, 2012.

Currently there is increasing social and public pressure on large online platforms to better adapt their filtering algorithms and to increase the transparency of their decisions. However, concerns about the state of democracy and the spread of conspiracy theories and terrorist content are accompanied by questions about freedom of expression and the agency of users on those sites – if not blind algorithms, then who should decide what content we see there? Since software has become such an important intermediary in our social life and in acquiring knowledge about the world, the mechanisms of its operation should remain under appropriate public control. Each of us, however, can also take care not to plunge into the filter bubble – it is essential to guard our privacy on the internet, to use browsers that block tracking scripts and search engines that do not profile their users. Above all, it is important to keep a critical distance from information found online and to think twice before passing it on. Let us assume that every choice we make online is tracked and affects what we see later.