On the orchestration of humans and computers in (digital) art history

What are the preconditions for smart Web 2.0 use in art historical research? What makes for a good collaborative tool? Here is a description of how art historians may become directors in a process of orchestration

In the months following Luis von Ahn’s Google TechTalk on “Human Computation” (July 26th, 2006; see also von Ahn’s article “Games with a Purpose” [AHN 2006]), there has been some excitement about the idea that human information handling, accurately attuned to the processing capacities of computers, could open up new deductive powers.

The idea that people can socially construct knowledge with the help of computers is of course not new: we buy books without bothering about how the patterns in what we pay attention to are being mined [refer to AttentionTrust; accessed 2007/03/29]; we upload richly illustrated webpages to public web servers, pages that within days are scanned, e.g. for words in the proximity of images, by Google Images’ indexing engines. And here is another example: a semantic map, constructed by the author using a tool from the InfoMap Project:

In the Information Mapping project [http://infomap.stanford.edu/; accessed 2007/03/30], texts from the British National Corpus (BNC), a 100-million-word collection sampled from a wide range of sources of written and spoken English [http://www.natcorp.ox.ac.uk/; accessed 2007/03/28], have been searched for frequently co-occurring words, and the hits recorded. The figure above visualizes a network of artist names. In another InfoMap tool, clicking any name in the network activates a search routine, returning 5 related words (or any other number between 1 and …; the user is given the choice here), which are added to the graph. Connections are represented by thin lines. By selectively and repeatedly clicking words (in this case: artist names), the user can interactively construct a network representation of a particular knowledge domain. The authors of the various textual samples in the BNC have thus, without being aware of it, participated in the construction of a network representing artistic circles in Paris just before and after 1900.
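
For readers who want to see the machinery, here is a minimal sketch in Python of the two machine steps just described: counting co-occurrences within a fixed window of tokens, and returning the strongest neighbours of a clicked word. It is not the InfoMap code (the project’s actual methods are more sophisticated); the window size, the vocabulary, and the two-sentence corpus are assumptions made for illustration.

```python
from collections import Counter

WINDOW = 10  # assumed co-occurrence window, in tokens

def cooccurrence_counts(texts, vocabulary):
    """Count how often two vocabulary words occur within WINDOW tokens
    of one another, over a whole corpus of texts."""
    counts = Counter()
    for text in texts:
        tokens = [t.strip(".,;:!?()\"'").lower() for t in text.split()]
        for i, token in enumerate(tokens):
            if token not in vocabulary:
                continue
            for other in tokens[i + 1 : i + 1 + WINDOW]:
                if other in vocabulary and other != token:
                    counts[frozenset((token, other))] += 1
    return counts

def neighbours(word, counts, n=5):
    """The 'click' step: return the n words most strongly linked to
    `word`, i.e. the related words the tool would add to the graph."""
    scored = [(next(iter(pair - {word})), c)
              for pair, c in counts.items() if word in pair]
    return sorted(scored, key=lambda wc: -wc[1])[:n]

# Illustration with an invented two-sentence 'corpus':
corpus = ["Monet and Renoir exhibited together in Paris",
          "Renoir admired Monet and visited Cezanne"]
counts = cooccurrence_counts(corpus, {"monet", "renoir", "cezanne"})
print(neighbours("monet", counts))  # -> [('renoir', 2), ('cezanne', 1)]
```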

[By the way: the word “van” in our illustration in fact has edges to “truck”, “car” and “exel”, not, as you may have expected, to “Gogh”. “Gogh” is only weakly represented in this automatically generated network. Want to see more? Try it out yourself here.]

The exciting thing about “human-based computation” is that a directed activation of humans may yield useful, processable information (and perhaps knowledge) that exceeds the comprehension of any of the individuals involved. No author had the network visualization shown above in mind while writing his or her text.

So in our example some people write art historical texts, a computer is used to store these texts, and the machine analyzes word co-occurrences and populates a database with the processing results. We just type or click the names of artists; the computer digs up nearest neighbours and formats the assembled data as a clickable map. Whenever such a chain of alternating machine routines and informed human decisions and activities finally results in a knowledge product, we have human-based computation.

In traditional computation, a human employs a computer to solve a problem: a human provides a formalized problem description to a computer, and receives a solution to interpret. In human-based computation, the roles are often reversed: the computer asks a person or a large number of people to solve a problem, then collects, interprets, and integrates their solutions. [Human-based computation, from Wikipedia; accessed at 2007/03/27]

But if we can design a game “with a purpose”, a game that sucks up information from its players and users, why not develop two games, or even a series of games and (why not?) other interactive interfaces, in which a chain of carefully modeled interactions in well-designed contexts finally results in the accumulation of specific knowledge? [The design of such a chain can be designated as orchestration, but I will get back to this below.] The humanities in particular might benefit from these new opportunities, since information processing there relies heavily on human judgment.

Enter mashups. A mashup (the concept originates in the world of music, where it denotes a genre of songs that consist entirely of parts of other songs [Mashup (music), from Wikipedia; accessed 2007/03/30]) is an application, accessible via the Web, which is programmed to retrieve and combine content from several data sources, often web-published databases. In the process, input from users may be actively solicited and merged with data from the sources mined. A mashup can be created whenever content owners offer a public interface (a so-called website API) to their holdings.

Definition of an API (Application Programming Interface): a set of functions that one computer program makes available to other programs so they can talk to it directly. There are many types of APIs: operating system APIs, application APIs, toolkit APIs and now web site APIs. [ProgrammableWeb: FAQ; accessed 2007/03/29]
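
To give an impression of how little a minimal mashup involves technically, here is a sketch in Python that queries Flickr’s public REST API for tagged photographs and merges the results with a locally kept set of annotations. The API key is a placeholder (Flickr hands out free keys), the annotation store is an invented stand-in for user input, and the URL patterns follow Flickr’s documentation at the time of writing.

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_FLICKR_API_KEY"  # placeholder: request a free key from Flickr

def flickr_search(tag, per_page=10):
    """Call the flickr.photos.search method of Flickr's REST API and
    return the list of photo records it describes."""
    params = urllib.parse.urlencode({
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "tags": tag,
        "per_page": per_page,
        "format": "json",
        "nojsoncallback": 1,
    })
    url = "https://api.flickr.com/services/rest/?" + params
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["photos"]["photo"]

def photo_url(p):
    """Assemble the static image URL from a photo record, following the
    pattern documented by Flickr."""
    return (f"https://farm{p['farm']}.staticflickr.com/"
            f"{p['server']}/{p['id']}_{p['secret']}.jpg")

# The 'mash' step: combine harvested photos with our own annotations,
# e.g. comments solicited from site visitors (an invented data source).
local_annotations = {}  # photo id -> annotation text
for photo in flickr_search("rembrandt"):
    print(photo["title"], photo_url(photo),
          local_annotations.get(photo["id"], "(not yet annotated)"))
```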

Because of the complexity of API use, the technical development of a mashup is no trivial task; it is a programmer’s affair. But some developments are bringing it within reach of the non-programming population. The museum sector is beginning to pay attention to the technology. [See Jim Spadaccini’s workshop on Museum Mashups at the Museums and the Web 2007 conference; accessed 2007/03/29] And for non-programmers, interesting Web 2.0 solutions are making it to the market. The most epoch-making is probably Yahoo’s Pipes. [Pipes; accessed 2007/03/29]

– Example: Yahoo Pipes. Still in its infancy.

A long listing of sample mashups, the oldest dating back to 2005-09-14, can be found at the ProgrammableWeb website. [http://www.programmableweb.com/mashuplist; accessed 2007/03/30] Are there any that may inspire scholars? […] A fine example of a mashup in the humanities domain: [The American Image: The Photographs of John Collier Jr.; accessed 2007/03/30], built on top of Flickr [people upload & tag photographs; others harvest these images; etc.]

The preceding paragraph was mainly about the technical development of mashups. What about mashup development with respect to content? Is this really worth considering for scholars? I think there is at least one incentive to take up the challenge: mashup construction has so far been dominated by information technology specialists, often with rather unimaginative products as a result. Here, just for instance, you can look at and vote for babes: Flickr Combat (or for a specific genre of art). We really need some quality input. I think scholars are willing to get involved in the subject if the opportunities are clearly marked out. Where are the gains to be made?

Creating mashups is about the coordination of people, data, and tasks. At least these factors [and probably more] should be considered in assessing opportunities for scholars:

  • should large numbers of people get involved?
  • has communication about the subject been inadequate up to now?
  • are raw materials (data) available? [e.g. via an API]

[I am afraid that at present the answer to the last question will be negative in most if not all cases. Institutions managing cultural heritage resources should take that to heart.]

– Example: “Principles of Art History”.

Let me give an example in the field of art history, just to show how scholars might be involved in the role of orchestrator.

In his book “Principles of Art History”, the early 20th-century art historian Heinrich Woelfflin (1864-1945) proposed a set of complementary pairs of formal concepts with which he intended to characterize the visual schemata that determined the production of various artifacts from the 15th to the 18th century. [WOELFFLIN 1950: 14-16] These are his Kunstgeschichtliche Grundbegriffe:

  • linear vs painterly
  • plane vs recession
  • closed form vs open form
  • multiplicity vs unity
  • absolute vs relative clarity

The problem with these principles (assuming they are real) is that we cannot easily collect samples. In the many books and articles published after the first edition of the “Kunstgeschichtliche Grundbegriffe” appeared, art historians must have commented on Woelfflin’s ideas, applying, supporting, or rejecting his principles. But scholarly information services have never been equipped for adequate information retrieval in this respect. In fact, much scholarly work may be qualified as “lost”: “Understanding of art appears only to be possible as an extensive knowing, and this knowledge, instead of lying in far away, unknown objects, is hidden from individual reflection behind the endless walls of books in libraries. An objectified experience is again becoming unknown.” [Otto K. Werckmeister, Kunstgeschichte als Divination, in: “Ideologie und Kunst bei Marx u.a. Essays”. Frankfurt: Suhrkamp 1974, p. 64]

For living art historians things are different. If Woelfflin had a point, contemporary [trained] observers should still be capable of distinguishing artifacts that have a “closed form” from artifacts that are rather “open” in form. One way to find out about today’s state of agreement among art historians would be to develop some sort of Cinquecento-Seicento Combat, or perhaps an Uffizi-Staatliche Gemäldegalerie Combat, where the task is to compare two paintings and pick the more “closed” (or “open”) form in a HotOrNot-style voting mashup. The game would yield no sweeping statements, however. Moreover, we would have difficulty finding significant numbers of art historians to do the job.
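
The voting core of such a Combat could be very small indeed. Here is a sketch in Python; the painting inventory is invented, and ranking by share of contests won is just one possible, rather crude, scoring rule.

```python
import random
from collections import defaultdict

# Invented inventory: short id -> (artist, title); a real Combat would
# pull images from museum databases or a photo-sharing API.
PAINTINGS = {
    "uffizi-017": ("Raphael", "Madonna of the Goldfinch"),
    "uffizi-112": ("Titian", "Venus of Urbino"),
    "smpk-054":   ("Rembrandt", "Portrait of a Young Woman"),
    "smpk-201":   ("Rubens", "Perseus and Andromeda"),
}

wins = defaultdict(int)      # painting id -> contests won
contests = defaultdict(int)  # painting id -> contests taken part in

def next_pair():
    """Draw two random paintings for one 'which is more closed in form?' round."""
    return random.sample(sorted(PAINTINGS), 2)

def record_vote(winner, loser):
    """Store one observer's judgment."""
    wins[winner] += 1
    contests[winner] += 1
    contests[loser] += 1

def ranking():
    """Rank paintings by share of contests won: a crude 'closed form' scale."""
    return sorted(PAINTINGS,
                  key=lambda p: wins[p] / contests[p] if contests[p] else 0,
                  reverse=True)

# Simulate a small voting session; a real Combat would replace the
# coin flip with actual clicks from visitors.
for _ in range(100):
    a, b = next_pair()
    record_vote(*random.sample([a, b], 2))
print(ranking())
```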

Enter the orchestration of humans and computers in (digital) art history.

Why not expand the number of observers involved, and organize a chain of judgments in such a way that lots of people do what they are good at doing, while we stay alert to the quality of the results? In other words: can we improve our Cinquecento-Seicento Combat?

This is the scenario of our orchestration: lots of people study Woelfflin’s principles and attempt to visualize his pairs of formal concepts. Then a select group of art historians is invited to assess these visualizations, voting for what they consider the best expressions of Woelfflin’s concepts. The top-rated pairs are subsequently used in a voting system with lots of non-expert observers (i.e. living souls unfamiliar with Woelfflin’s principles). The results of this [ongoing] voting campaign by ordinary people are then presented to [still other] experts, who are asked to judge the appropriateness of the resulting sets.
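
In software terms, the orchestration is simply a chain of stages, each consuming the previous stage’s output. Below is a sketch in Python; the human steps would in reality be interactive web interfaces, and the record fields (expert_score, crowd_votes) and the threshold are invented stand-ins for the data those interfaces would collect.

```python
def expert_selection(visualizations, shortlist=10):
    """Stage 2: keep the visualizations the expert panel rated best."""
    ranked = sorted(visualizations, key=lambda v: v["expert_score"], reverse=True)
    return ranked[:shortlist]

def crowd_voting(selection):
    """Stage 3: attach the share of non-expert votes agreeing with the
    intended Woelfflin label to each shortlisted visualization."""
    for v in selection:
        votes = v["crowd_votes"]  # list of 1 (agree) / 0 (disagree)
        v["agreement"] = sum(votes) / len(votes)
    return selection

def expert_review(voted, threshold=0.5):
    """Stage 4: a second expert panel keeps only the sets whose crowd
    results it judges appropriate; the threshold stands in for that
    human judgment."""
    return [v for v in voted if v["agreement"] >= threshold]

def orchestrate(visualizations):
    """The whole chain: student submissions -> expert vote ->
    crowd vote -> expert review."""
    return expert_review(crowd_voting(expert_selection(visualizations)))

# One invented record, as stage 1 (students visualizing a concept pair)
# might produce it:
submissions = [{"pair": "closed vs open form",
                "images": ("img_0412.jpg", "img_0907.jpg"),
                "expert_score": 4.6,
                "crowd_votes": [1, 1, 0, 1, 1]}]
print(orchestrate(submissions))
```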

But is all this doable for scholars?

It is hard to predict to what degree historians of art and other scholars active in the domain of cultural heritage will have to be familiar with [specific] information technologies in the near future. What should a scholar know about information technology to be productive in teaching and research? The question is urgent, since in higher education it is our task to ensure that students become effective users of IT in their future professional careers. Ten years ago it was sufficient to learn how to construct thematic research collections, and to acquire some knowledge of image processing, the basics of textual markup, and perhaps website development. Since then, professionalizing tendencies in ICT have made it difficult for scholars to keep pace with new developments. And with the coming of the semantic web and Web 2.0, the sheer number of possible cross-connections disqualifies isolated efforts to create non-interoperable thematic research collections.

Of course we must have access to substantial resources, and as I hinted above, that will be a problem for some time to come. The photographs of John Collier Jr., accessible via The American Image mashup, were uploaded to Flickr, which means the Flickr API could be used to access the content. Jim Spadaccini’s company Ideum took care of programming and development. But apart from technological know-how, what is needed to produce successful applications in the humanities? What are the qualities scholars could bring in? What knowledge do they have to offer?

Here is my list:

  • knowledge about cultural artifacts (works of art)
  • knowledge about what humans find interesting in artifacts
  • knowledge about how computers can keep records and can establish relationships between records
  • knowledge about how large databases can be analyzed and mined
  • knowledge about the social [de-]construction of knowledge
  • knowledge about how the context of assignments influences results
  • knowledge about human motivation

Are there any other priorities for the moment? I think there are. In my opinion the cultural heritage sector must make it its business to introduce worldwide standardized short object identifiers for works of art and artifacts. What I mean is some kind of short key, not a more extensive record describing a work of art as in the Object ID standard. [http://www.object-id.com/; accessed 2007/03/30] Nor do I mean that we should ensure that digital objects have persistent identifiers. [Digital Preservation for Museums (2004); accessed 2007/03/30] If in the future we aspire to build intriguing mashups in the arts and humanities, the unique objects we annotate and discuss should be traceable across a diverse collection of databases. There have been initiatives to establish object identifiers for managing intellectual property rights [cf. DigiCULT.Info, Issue 4; accessed 2007/03/30], and attempts to allow for the quick transmission of information about stolen objects [e.g. The Art Loss Register; accessed 2007/03/30], but as far as I know no serious attempt to reach agreement on short identifiers for works of art with a view to intellectual contextualization has found wide acceptance.
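
Purely as a thought experiment, such a register could hand out nothing more than a serial number in compact base-36 notation plus a checksum character, so that mistyped keys are caught before any database lookup. Everything in the sketch below (the “WOA-” prefix, the key width, the checksum rule) is invented for illustration; a real standard would be a matter of community agreement.

```python
import string

ALPHABET = string.digits + string.ascii_uppercase  # base 36

def check_char(body):
    """A position-weighted mod-36 checksum character, so that typos and
    transposed characters in a short key are likely to be detected."""
    total = sum(ALPHABET.index(c) * (i + 1) for i, c in enumerate(body))
    return ALPHABET[total % 36]

def make_id(serial):
    """Turn a register serial number into a short, checkable artifact key,
    e.g. make_id(4280) -> 'WOA-00003AWB'."""
    body, n = "", serial
    while n:
        body = ALPHABET[n % 36] + body
        n //= 36
    body = body.rjust(7, "0")
    return f"WOA-{body}{check_char(body)}"

def is_valid(artifact_id):
    """Check prefix, length, and checksum of a key before any lookup."""
    prefix, _, rest = artifact_id.partition("-")
    return (prefix == "WOA" and len(rest) == 8
            and check_char(rest[:-1]) == rest[-1])

print(make_id(4280), is_valid(make_id(4280)))  # WOA-00003AWB True
```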

I think we really need a register of worldwide valid IDs for/keys to artifacts.
