Introduction
Helping young users search for information on the web is a major challenge for the field of Information Retrieval. Young users of search engines have difficulty finding the information they wish to access, especially children aged 12 and under. More than 60% of children between 9 and 12 years old report occasionally needing help with information acquisition, and this number grows to 90% for children between 5 and 8 years old. A shorter observed average query length, together with heavier use of natural language, indicates that formulating a specific query from keywords is a problem for this target group.
Children's queries have an informational intent, i.e. children query a search engine for information about topics that they assume are available on the web. This is in contrast to adults' queries, which have a more transactional intent. It has been observed that children, when searching with an informational intent, often read a result's page from top to bottom. They do not exhibit the scanning behaviour that is often seen in adults.
A system that highlights the parts of a page especially relevant to the child's query can help guide these children to the information they want. Wikipedia is a major source of information for Dutch school children. However, lacking proficiency in scanning a piece of text for relevant information, children will often start reading a Wikipedia article at the top of the page and continue until the information is found. Wikipedia articles contain large portions of text and other content that are not relevant to the specific query, possibly overwhelming a child with information. Children's search behaviour with an informational intent could therefore be improved by a system that directs them to those parts of the article's content that are most relevant to the query. The aim of this research was to measure the effect of a child-oriented Wikipedia browsing system on information retrieval performance.
Grawe
To investigate this, a search engine named Grawe (which means 'to dig' in Afrikaans) was developed that aggregated results from a number of different sources (Google, Bing, YouTube, Wikipedia, StackOverflow, Tumblr, GitHub, Vimeo, Reddit, Flickr, Nature, and Yummly). These results were then ranked and displayed in a list based on query relevance. Features of this basic search engine include category labelling based on the returned results; favicon retrieval; URL abbreviation (the full URL is shown on mouseover) for readability; result bundling (i.e. 'More results from [domain]..'); and a quick-info box for Wikipedia results, recipes, and StackOverflow questions.
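The paper does not specify how Grawe merged and bundled results, but the two steps described above (relevance-ranked aggregation across sources and 'More results from [domain]..' bundling) could be sketched as follows. The function names, the score scale, and the per-source weights are assumptions for illustration, not the actual implementation.

```python
from collections import defaultdict
from urllib.parse import urlparse

def merge_results(per_source, weights=None):
    """Merge ranked result lists from several sources into one list.

    per_source: {source_name: [(title, url, score), ...]} with scores in [0, 1].
    weights: optional per-source multiplier (e.g. to boost Wikipedia).
    """
    weights = weights or {}
    merged = []
    for source, results in per_source.items():
        w = weights.get(source, 1.0)
        for title, url, score in results:
            merged.append({"title": title, "url": url,
                           "source": source, "score": score * w})
    merged.sort(key=lambda r: r["score"], reverse=True)
    return merged

def bundle_by_domain(results, max_per_domain=2):
    """Collapse surplus results from one domain into a 'More results from ...' entry."""
    shown = []
    counts = defaultdict(int)
    overflow = defaultdict(list)
    for r in results:
        domain = urlparse(r["url"]).netloc
        counts[domain] += 1
        if counts[domain] <= max_per_domain:
            shown.append(r)
        else:
            overflow[domain].append(r)
    for domain, extra in overflow.items():
        shown.append({"title": f"More results from {domain}..",
                      "url": f"https://{domain}", "source": "bundle",
                      "score": 0.0, "bundled": extra})
    return shown
```

In this sketch the bundled entry keeps the collapsed results under a `bundled` key, so the interface can expand them on demand.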
Figure 1. Left: Overview of the different features in the Grawe search engine on a sample query. Right: Example view of the StackOverflow quickinfo box on a sample query.
Graaf
The Grawe engine was used as the basis for two versions of a Wikipedia browsing system (dubbed Graaf, the Dutch word for 'dig'). The first version was nearly an exact copy of Grawe, but returned only Dutch Wikipedia results. The second version was the enhanced browsing structure: upon entering a query, users were immediately served a Wikipedia page, eliminating the need to select the best result from a list, which is reported to be a problem for young users.
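The text does not say how the enhanced version selects the page to serve; one plausible sketch uses the real MediaWiki search API of the Dutch Wikipedia to take the top full-text hit directly. The `fetch` parameter is an assumption added here so the network call can be swapped out; the endpoint and `action=query&list=search` parameters are the actual MediaWiki API.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://nl.wikipedia.org/w/api.php"

def search_url(query):
    """Build a MediaWiki full-text search request for the Dutch Wikipedia."""
    params = urlencode({"action": "query", "list": "search",
                        "srsearch": query, "srlimit": 1, "format": "json"})
    return f"{API}?{params}"

def _fetch(url):
    """Default fetcher: GET the URL and decode the JSON response."""
    with urlopen(url) as resp:
        return json.load(resp)

def top_article_url(query, fetch=_fetch):
    """Return the URL of the best-matching article, or None if there is no hit."""
    hits = fetch(search_url(query))["query"]["search"]
    if not hits:
        return None
    return "https://nl.wikipedia.org/wiki/" + hits[0]["title"].replace(" ", "_")
```

A front end could then redirect straight to `top_article_url(query)` instead of rendering a result list.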
A non-standard, enlarged font was chosen to display the content and increase readability. Tables containing meta-information and auxiliary content such as edit links are stripped. Images are kept, but always displayed to the right of the textual content. The most eye-catching feature is the highlighting. The initial query is first stripped of common words (because children rely heavily on natural language in their queries), and an iterator function then finds all sentences that contain at least 75% of the remaining keywords and highlights them in a yellow hue. Additional highlighting in blue marks synonyms of words commonly used in informational queries. For example, 'big', 'large', and 'size' might be treated as synonyms, since a user may search for 'how big is the moon', but also 'how large is the moon' or 'what is the size of the moon'. This highlighting was implemented to guide the user's eye to relevant parts of the text, and the page automatically jumps to the first piece of highlighted text (first to a highlighted sentence if available, then to a synonym).
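The sentence-highlighting step described above can be sketched as follows. The stop-word list, the sentence splitter, and the `<mark>` wrapper are illustrative assumptions (the synonym highlighting in blue would work analogously with an extra synonym table); only the 75% keyword-coverage threshold comes from the description.

```python
import re

# A tiny Dutch stop-word list for illustration; the real system would use a
# much fuller one to strip natural-language filler from queries.
STOPWORDS = {"hoe", "wat", "is", "de", "het", "een", "van", "waarom", "wie"}

def keywords(query):
    """Strip common words from a natural-language query, keeping keywords."""
    return [w for w in re.findall(r"\w+", query.lower()) if w not in STOPWORDS]

def highlight(text, query, threshold=0.75):
    """Wrap sentences containing >= threshold of the keywords in <mark> tags."""
    kws = keywords(query)
    out = []
    # Naive sentence split on terminal punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = set(re.findall(r"\w+", sentence.lower()))
        if kws and sum(k in words for k in kws) / len(kws) >= threshold:
            out.append(f"<mark>{sentence}</mark>")
        else:
            out.append(sentence)
    return " ".join(out)
```

For the query 'hoe groot is de maan', only 'groot' and 'maan' survive stop-word removal, so a sentence must contain at least one and a half of those two keywords (i.e. both, after rounding up) to be marked.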
The sidebar functions as a scrollbar, with the additional feature of showing a visual overview of the current position within the text. It was implemented to help users maintain a broader picture of the article's structure, and to let them spot other highlighted sentences further down the page more quickly. A minor feature is the horizontal list of suggested related articles at the top of the page, implemented both to suggest what else to search for and to give some feedback on the search query. Finally, a dictionary was implemented to help users understand difficult terms: every page was parsed using the Wizenoze API, after which difficult terms could be identified and easier synonyms supplied.
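The last step, annotating difficult terms with easier synonyms, could be sketched as below. The glossary here is a hypothetical stand-in for the term/synonym pairs the actual system obtained from the Wizenoze API, whose interface is not described in the text, and the tooltip markup is likewise an assumption.

```python
import re

# Hypothetical glossary of difficult terms and simpler Dutch synonyms; the
# actual system derived these pairs from the Wizenoze API.
GLOSSARY = {"eruptie": "uitbarsting", "aviatie": "luchtvaart"}

def annotate_terms(text, glossary=GLOSSARY):
    """Wrap each difficult term in a span whose title holds the easy synonym."""
    def repl(match):
        word = match.group(0)
        easy = glossary.get(word.lower())
        if easy is None:
            return word  # not a difficult term, leave untouched
        return f'<span class="dict" title="{easy}">{word}</span>'
    return re.sub(r"\w+", repl, text)
```

A browser would then show the simpler synonym as a tooltip when the child hovers over the marked term.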
Figure 2. Overview of the different features in the Graaf search engine on a sample query.
Experiment
A primary school cooperative was enthusiastic to let its students work with the engine, enabling thorough testing of the new browsing system. Children in classes 6, 7, and 8, aged between 9 and 13, were tested using the engine. The children were given a list of questions, such as 'What was the name of the volcano that erupted in Iceland in 2008 and disrupted a lot of air traffic?'. These questions were designed, together with the school teachers, to be easy enough for the children to know what the topic was about, but not so easy that they would know the answer right away, forcing them to use the search engine to find the answer. Each child started from a randomly chosen question and then had 10 minutes to look up the answers to as many questions as possible in the given timeframe.
Results indicated that the enhanced structure yielded a roughly 48-second improvement in lookup time per question (158.9 s versus 110.8 s, N=24, p=0.068). More questions were answered within the same timeframe in the enhanced-structure condition. Accuracy was also better in that condition: students in the enhanced-structure group answered more questions correctly.
Figure 3. Left: Mean lookup time per condition, and average amount of questions answered per condition. Right: Answer accuracy for the first two questions respectively.