
The end of the Information Age?

Last week there was a really interesting article at Heise online, a German tech magazine. It declares the end of the modern internet, arguing that we are entering a postmodern one. It also tackles the value of information and its decline in a postmodern age.

Traditionally, information science talks a lot about the governance of information. The idea behind this is, in my opinion, based on a library: a library's goal is to make as much information (books) available as possible in order to give it to people. People read the books and become smarter. At the same time, books are still fairly expensive, so the ideal is that all books should be free for everyone to read. Then everyone can get smarter and everything will be better.

With the internet, this happened. The only thing that did not happen was that people went to this new library and picked the most relevant books. At that point, we as information scientists stepped in and said: „You have to make sure only well-respected people can write books in your library, and we can teach you how to recognize them.“ The issue was that it is not so easy to find out who is a credible source. Learning this takes a lot of time, and simple heuristics do not work. In science, we use peer review to make sure we only publish what makes sense, but even that has flaws.

After this, another phase started: the age of Wikipedia. The promises came true, and everyone was able to edit it (although you were more likely to do so if you were a privileged nerd). There were fewer gatekeepers, but cheap information at a very high level. This idea was so successful that social networks came along and drove it even further: now truly everyone with an account was able to publish content, gain followers, and so on. At the same time, the incentive to keep people on the platform grew, in order to bombard them with ads. This had the advantage that people could make a living from YouTube, but it also meant people were highly rewarded for posting conspiracy theories and hate comments.

Another thing happened: information got so cheap that even if I do not find someone who shares information I like, I can just introduce my own source of information, and a „market“ will supposedly make sure that the best source wins. In the end, the platforms that were best at selling our attention to ad companies won. And the best way to get our attention seems to be lies (please stop calling them fake news).

In my opinion, the question of how to fix this is extremely relevant to information science. I do not think we can fix it with better automatic moderation or censorship; there is far too much information out there for that. We also saw that when Twitter blocked Donald Trump's account, he simply moved to the next platform.

I see two main barriers here: first, we do not want censorship by old gatekeepers, be it a nation-state, a company, or some weird guy at Wikipedia. Second, I also do not see this happening, since all of it costs a lot of money: if people do the checking, you have to pay them; if an algorithm does it, you have to deal with its errors; if volunteers do it, you create the same hierarchies of gatekeepers you wanted to avoid in the first place.

Therefore I am really curious what comes next. I also think this is very interesting to look at from a theoretical perspective, and perhaps a starting point for developing approaches that actually work.


Digital Humanities at Hochschule Darmstadt

In the summer semester, Professor Rittberger and I will be teaching a class about digital humanities at Hochschule Darmstadt in the information science program. Here is our syllabus; I will try to upload the slides as well (they are in German, though).

We want to give a broad overview of what Digital Humanities means. There are also other classes dealing with text mining, so we do not focus on it as much (there are four sessions about it, though).

  1. Introduction to Digital Humanities: What are DH, what can we do with digital methods
  2. Research methods: Qualitative and quantitative methods in the social sciences, hermeneutics, virtual research environments
  3. Law, ethics: A basic understanding of what the law means and what problems it can cause; this leads to data management and open data
  4. XML: Basics about XML, why DTDs are useful, standards like TEI (2 sessions), XML regarding ontologies
  5. Editions and digitization: What are editions, and how can we create them digitally? How do we digitize content?
  6. Basics of Text analysis: Distant Reading, Google n-grams, how new methods in text analysis can help in research
  7. Named-entity recognition: We chose this NLP problem to give an overview of what can be done with new technology, and also to compare approaches from computer science, such as machine learning, with approaches from information science and the semantic web
  8. Topic Modelling: Basic introduction and practical usage with R
  9. Network analysis: Basics of network analysis and how to use it for instance for plays. Tool: Gephi
  10. Geoinformation: How can we code geographical data, how can we use it in DH?
  11. 3D-Modelling: What new approaches are there using 3D-Modelling, how can we use it in DH? Tool: Blender
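Since the XML sessions introduce DTDs and TEI, here is a minimal sketch of the kind of exercise that fits there: parsing a small TEI-like fragment with Python's standard library and pulling out the tagged person names. The fragment itself is invented for illustration and is not a complete, valid TEI document.

```python
import xml.etree.ElementTree as ET

# A tiny TEI-like fragment (illustrative only, not a full TEI document)
tei = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text>
    <body>
      <p>Ein Satz mit <persName>Goethe</persName> als Beispiel.</p>
    </body>
  </text>
</TEI>"""

# TEI elements live in the TEI namespace, so we map a prefix for queries
ns = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(tei)

# Collect the text of every <persName> element anywhere in the tree
persons = [el.text for el in root.findall(".//tei:persName", ns)]
print(persons)  # ['Goethe']
```

This also previews the named-entity session nicely: here the entities are hand-annotated in the markup, whereas later in the course they are recognized automatically.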