
Interactive Technology and Smart Education, Volume 4, Issue 4, November 2007
Interactive Technology and Smart Education
PROMOTING INNOVATION AND A HUMAN TOUCH
ISSN 1741-5659
www.emeraldinsight.com/itse.htm
Contents
Vol 4 No 4 November 2007
Interactive Technology and Smart Education
PROMOTING INNOVATION AND A HUMAN TOUCH
SPECIAL ISSUE
Papers from the IEEE International Workshop on Multimedia
Technologies for E-Learning (MTEL)
Gerald Friedland, Lars Knipping and Nadine Ludwig
Guest editorial
Gerald Friedland, Lars Knipping and Nadine Ludwig 179
Vector graphics for web lectures: experiences with Adobe Flash 9 and SVG
Markus Ketterl, Robert Mertens and Oliver Vornberger 182
Authoring multimedia learning material using open standards
and free software
Alberto González Téllez 192
E-learning activity-based material recommendation system
Feng-jung Liu and Bai-jiun Shih 200
Educational presentation systems: a workflow-oriented survey and
technical discussion
Georg Turban 208
Honorary Advisory Editor
Professor Alistair Sutcliffe, University of Manchester, UK

Editorial Advisory Board
Anne Adams, UCL Interaction Centre, UK
Petek Askar, Hacettepe University, Turkey
Ray Barker, British Educational Suppliers Association, UK
Maria Bonito, Technical University of Lisbon, Portugal
Marie-Michèle Boulet, Université Laval, Canada
Sandra Cairncross, Napier University, UK
Gayle J. Calverley, University of Manchester, UK
John M. Carroll, Penn State University, USA
Chaomei Chen, Drexel University, USA
Sara de Freitas, Birkbeck University of London, UK
Alan Dix, Lancaster University, UK
Khalil Drira, LAAS-CNRS, France
Bert Einsiedel, University of Alberta, Canada
Xristine Faulkner, London South Bank University, UK
Terence Fernando, University of Salford, UK
Gerhard Fischer, University of Colorado, CO, USA
Monika Fleischmann, Fraunhofer Institute for Media Communication, Germany
Giancarlo Fortino, University of Calabria, Italy
Gerald Friedland, Freie Universität Berlin, Germany
Bernie Garrett, University of British Columbia, Canada
Lisa Gjedde, Danish University of Education, Denmark
Ugur Halici, Middle East Technical University, Turkey
Lakhmi Jain, University of South Australia, Australia
Joanna Jedrzejowicz, University of Gdansk, Poland
Joaquim A. Jorge, Technical University of Lisbon, Portugal
Athanasios Karoulis, Aristotle University of Thessaloniki, Greece
Lars Knipping, Freie Universität Berlin, Germany
John R. Lee, University of Edinburgh, UK
Paul Leng, Liverpool University, UK
Anthony Lilley, magiclantern, UK

Zhengjie Liu, Dalian Maritime University, China
Nadia Magnenat-Thalmann, University of Geneva, Switzerland
Terry Mayes, Glasgow Caledonian University, UK
Toshio Okamoto, University of Electro-Communications, Japan
Martin Owen, NESTA Futurelab, UK
Vasile Palade, Oxford University, UK
Roy Rada, University of Maryland, MD, USA
Elaine M. Raybourn, Sandia National Laboratories, NM, USA
Rhonda Riachi, Oxford Brookes University (ALT), UK
Kerstin Röse, University of Kaiserslautern, Germany
Joze Rugelj, University of Ljubljana, Slovenia
Eileen Scanlon, Open University, UK
Jane K. Seale, University of Southampton, UK
Helen Sharp, Open University, UK
Vivien Sieber, University of Oxford, UK
David Sloan, University of Dundee, UK
Andy Smith, University of Luton, UK
Paul Strickland, Liverpool John Moores University, UK
Josie Taylor, Open University, UK
Malcolm J. Taylor, Liverpool University, UK
Thierry Villemur, LAAS-CNRS, France
Weigeng Wang, University of Manchester, UK
Founder and Editor-in-Chief
Dr Claude Ghaoui
School of Computing & Mathematical Sciences, Liverpool John Moores University, Byrom Street,
Liverpool L3 3AF, UK. Email:
Guest editorial
Guest Editors:

Gerald Friedland
International Computer Science Institute, Berkeley, California, USA
Lars Knipping
Department of Mathematics, Berlin Institute of Technology, Berlin, Germany and
Nadine Ludwig
MuLF – Center for Multimedia in eLearning and eResearch, Berlin Institute of Technology,
Berlin, Germany
Interactive Technology and Smart Education (2007) 179–181
© Emerald Group Publishing Limited
INTRODUCTION
Ever since the advent of automatic computation devices,
efforts have been made to answer the question of how to
properly integrate them and take advantage of their capa-
bilities in education. Educational multimedia systems
promise to make learning easier, more convenient, and thus
more effective. For example, classroom teaching enriched
by vivid presentations promises to improve the motivation
of the learner. Concepts may be given a perceivable existence
in a video show, and the observability of important
details can be stressed. Video capturing of lectures has
become common practice to produce distance education
content directly from the classroom. Simulations allow
students to explore experiments that could not otherwise
be conducted physically.
Today, almost every university claims to have a strate-
gy to utilize the opportunities provided by the Internet
or digital media in order to improve and advance tradi-
tional education. However, the question about how mul-
timedia can really make education more exploratory and
enjoyable is as yet not completely answered. In fact, we

are just beginning to understand the real contribution of
multimedia to education. For example, various web sites
and lecture videos produced as part of the “e-learning
hype” often do not exploit the full potential of multimedia
for teaching. Many questions remain: how can we better support
participant interaction in classrooms and lecture halls? What
are the best tools for the development of educational
multimedia material? How can we make the production of
educational material easier and existing applications more
reusable?
In addition, new technologies and trends – such as
mobile and semantic computing – open up new and
exciting opportunities for teaching with multimedia and
the creation of multimedia learning material. How can
these new trends in multimedia research be used to
improve multimedia education or education in general?
In order to find answers to these and many other ques-
tions, we organized the second IEEE International
Workshop on Multimedia Technologies for E-Learning
(MTEL) in connection with the 9th IEEE International
Symposium on Multimedia. Based on the success of the
first MTEL workshop in 2006, our goal was to attract
researchers and educators from the multimedia commu-
nity as well as researchers from other fields, such as
semantic computing and HCI, who are working on
issues that could help improve multimedia education as
well as teaching and learning in general. Based on discussion
among these experts with different backgrounds,
the workshop aimed to identify new trends and highlight
future directions for multimedia-based teaching.

The following special issue of Interactive Technology
and Smart Education presents four papers that have been
carefully selected by the program committee for publication
in this journal. They have been extended by the
authors according to the reviewers' suggestions. We hope that
these articles are able to inspire even more creativity in
the overlap between human-centered and technology-
centered research. The following paragraphs provide a
short overview of the selected articles.
Vector Graphics for Web Lectures: Experiences with Adobe
Flash 9 and SVG presents experiences made during the
development and everyday use of two versions of the lecture
recording system virtPresenter. The first of these
versions is based on SVG while the second one is based on
Adobe Flex2 (Flash 9) technology. The authors point out
the advantages vector graphics can bring for web lectures
and briefly present a hypermedia navigation interface for
web lectures that is based on SVG. Also, they compare
the formats Flash and SVG and conclude by describing
changes in workflows for administrators and users that
have become possible with Flash.
Authoring Multimedia Learning Material using Open
Standards and Free Software deals with avoiding drawbacks
such as license costs and software company dependencies when
distributing interactive multimedia learning materials.
The authors propose using open data standards and free
software as an alternative without these inconveniences,
although the available authoring tools are commonly less
productive. The proposal is based on SMIL as the composition
language, in particular the reuse and customization of the SMIL
templates used by INRIA for their technical presentations.
The authors also propose a set of free tools to produce
presentation content and design, focusing on
RealPlayer as the delivery client.
In E-Learning Activity-based Material Recommendation
System, an application that uses LDAP and JAXB
to reduce the load on search engines and the
complexity of content parsing is described. Additionally,
by analyzing the logs of learners' learning behaviour,
likely keywords and the associations among the
learning course contents are derived.
Finally, the paper specifies how the metadata of the
learning materials on different platforms is integrated and
maintained in the LDAP server.
Finally, Educational Presentation Systems: a workflow-oriented
survey and technical discussion presents an overview of the
processes before, during and after an educational presentation.
The different processes are presented in the form of a
workflow. The workflow is also used to present,
analyze and discuss different systems, including their
individual tools covering the different phases of the
workflow. After this overview of systems, the different
approaches are discussed with respect to the workflow. This
discussion provides specific technical details and the differences
between the systems in focus.
ACKNOWLEDGEMENTS
The Guest Editors wish to thank Claude Ghaoui, ITSE
Editor-in-Chief, and the dedicated reviewers for their
detailed and thoughtful work. They were:
Abdallah Al-Zoubi, Princess Sumaya University for
Technology, Jordan
Michael E. Auer, Carinthia Tech Institute, Austria
Helmar Burkhart, University of Basel, Switzerland
Paul Dickson, University of Massachusetts, USA
Berna Erol, Ricoh California Research Center, USA
Rosta Farzan, University of Pittsburgh, USA
Claude Ghaoui, Liverpool John Moores University, UK
Wolfgang Hürst, University of Freiburg, Germany
Sabina Jeschke, University of Stuttgart, Germany
Ulrich Kortenkamp, Paedagogische Hochschule
Gmuend, Germany
Ying Li, IBM T.J. Watson Research Center, USA
Marcus Liwicki, University of Bern, Switzerland
Robert Mertens, University of Osnabrück, Germany
Jean-Claude Moissinac, ENST Paris, France
Thomas Richter, University of Stuttgart, Germany
Anna Marina Scapolla, University of Genova, Italy
Georg Turban, Darmstadt Institute of Technology,
Germany
Nick Weaver, ICSI Berkeley, USA
Debora Weber-Wulff, FHTW Berlin, Germany
Marc Wilke, University of Stuttgart, Germany
Peter Ziewer, Munich Institute of Technology, Germany
We would like to thank all authors for their quick
revision and extension of the articles presented herein.
Their commitment made it, again, possible to release this
special issue so quickly after the workshop.
REFERENCE

Friedland, G., Knipping, L. and Ludwig, N. (2007), "Second
IEEE International Workshop on Multimedia Technologies
for E-Learning”, Proceedings of the 9th IEEE International
Symposium on Multimedia, IEEE Computer Society, Taichung,
Taiwan, pp. 343–95.
ABOUT THE GUEST EDITORS
Dr Gerald Friedland is currently a researcher at the
International Computer Science Institute in Berkeley,
California. Prior to that, he was a member of the multi-
media group of the computer science department of Freie
Universität Berlin. His work concentrates on intelligent
multimedia technology with a focus on methods that
help people to easily create, edit, and navigate content,
aiming at creating solutions that “do what the user
means”. He is program co-chair of the 10th IEEE
Symposium on Multimedia and the Second IEEE
International Conference on Semantic Computing. In
addition to the second IEEE International Workshop on
Multimedia Technologies for E-learning, he also co-
chaired the first ACM Workshop on Educational
Multimedia and Multimedia Education. He has received
several international research and industry awards.
Among them is the European Academic Software Award
in 2002, for the creation of the E-Chalk system in coop-
eration with Lars Knipping. He is also a member of the
editorial advisory board of ITSE.
Dr Lars Knipping is a researcher at the mathematics

department at Technische Universität Berlin. He belongs
to the board of editors of ITSE and the editorial team of
iJET (International Journal of Emerging Technologies in
Learning). Before joining Technische Universität he
worked as a scientific consultant in a research project for
a state-funded TV broadcaster, the “Sender Freies Berlin”,
followed by positions as researcher and instructor at the
multimedia group at the computer science department of
Freie Universität Berlin and as lecturer in International
Media and Computing at the FHTW Berlin. Dr. Knipping
received his Ph.D. degree for his work on the E-Chalk
system and holds M.Sc. degrees in both mathematics and
computer science.
Nadine Ludwig graduated from Technische Universität
Ilmenau with a degree in Computer Science in 2005. In
her thesis she described the integration of remote labora-
tories in Learning Content Management Systems via
SCORM. Since May 2006 Ms Ludwig has been a part of
the MuLF Center at Technische Universität Berlin as a
research associate. Currently she is working on her
PhD-thesis in the field of Semantics and Modularization
of Learning Objects in Cooperative Knowledge Spaces.
Vector graphics for web lectures:
experiences with Adobe Flash 9 and SVG
Markus Ketterl
Virtual Teaching Support Center, University of Osnabrück, Osnabrück, Germany
Email:
Robert Mertens
Fraunhofer IAIS, Schloß Birlinghoven, Sankt Augustin, Germany
Email: and

Oliver Vornberger
Department of Computer Science, University of Osnabrück, Osnabrück, Germany
Email:
Abstract
Purpose – The purpose of this paper is to describe vector graphics for web lectures, focusing on the experiences
with Adobe Flash 9 and SVG.
Design/methodology/approach – The paper presents experiences made during the development and everyday use
of two versions of the lecture-recording system virtPresenter. The first of these versions is based on SVG, while the
second is based on Adobe Flex2 (Flash 9) technology. The paper points out the advantages vector graphics can bring
for web lectures and briefly presents a hypermedia navigation interface for web lectures that is based on SVG. The
paper also compares the formats Flash and SVG and concludes by describing changes in workflows for administra-
tors and users that have become possible with Flash.
Findings – Vector graphics are an ideal content format for slide-based lecture recordings. File sizes can be kept small
and graphics can be displayed in superior quality. Information about text and slide objects is stored symbolically,
which allows texts to be searched and objects on slides to be used interactively, for example, for navigation purposes.
The use of vector graphics for web lectures is, however, a trend that has begun only recently. A major reason for this
is that multiple media formats have to be combined in order to replay video and slides.
Originality/value – The paper offers an insight into vector graphics as an ideal content format for slide-based lecture
recordings.
Keywords: Lectures, Worldwide web, Graphical user interfaces, Presentation graphics, Multimedia, Teaching aids
Paper type: Research paper
Interactive Technology and Smart Education (2007) 182–191
© Emerald Group Publishing Limited
1. INTRODUCTION
Vector based graphics formats offer a number of possi-
bilities for the realization of web lecture interfaces for
slide based talks. One major advantage is that they
support capturing contents in a symbolic manner which
is a requirement for searching text in a recording (Lauer
and Ottmann, 2002). They also offer superior picture

quality. Last but not least, vector based graphics formats
enable developers to realize a high degree of interactivity
that can be used for implementing advanced navigation
concepts as described in (Mertens et al. 2006d). They also
can be used to tackle a number of layout problems as fur-
ther described in (Mertens et al. 2006b).
Vector graphics are, however, not very common in web
lectures. This article presents the authors’ experience
with two different vector graphics formats: Scalable
Vector Graphics (SVG) and Adobe's new Flex 2 (Flash 9
based) technology for content presentation and control in
the web lecture system virtPresenter.
The SVG based version of the lecture recording system
has been used at the University of Osnabrück and at the
University of Applied Sciences Osnabrück since summer
2003. During this time, users with different backgrounds,
knowledge and expectations experienced the system in
everyday use. The Adobe Flex 2 based counterpart was
introduced in February 2007 after a seven-month development
and testing period. This new version has, apart from
small changes concerning further system requirements and
improvements, been in productive use since March 2007.
The article is organized as follows: Section 2 points out
the advantages vector graphics can bring for web lectures
and briefly presents a hypermedia navigation interface for
web lectures that is based on SVG. Section 3 describes
experiences with this SVG based interface and points out

difficulties that arose during the use of this interface in a
number of university courses. Section 4 compares Flash
and SVG with respect to their use in lecture recording.
Section 5 introduces the Flash based successor of the
SVG based interface. Section 6 describes changes in work-
flows for administrators and users that have become possi-
ble with Flash. Section 7 briefly summarizes the work pre-
sented in this article and refers to future projects and ideas.
2. ADVANTAGES OF VECTOR
GRAPHICS IN WEB LECTURES
The advantages of using vector graphics for content rep-
resentation in web lectures can be summarized in a couple
of words: vector graphics store content in a symbolic way,
vector graphics can be enlarged without loss of quality
and many vector graphics formats allow for interactive on-
the-fly manipulation of contents. The aim of this section
is to show why these properties of vector graphics are use-
ful by showing how each of them improves web lectures.
2.1 Symbolic Representation of
Contents and Interactivity
The original virtPresenter user interface shown in
Figure 1 was developed to implement a hypermedia
navigation concept for lecture recordings (Mertens,
2007). Hypermedia navigation consists of five elements:
full text search, bookmarks, backtracking, structural
elements and footprints (Bieber, 2000).
Full text search is realized by searching the text of the
slides in the slide overview. Search results are highlight-
ed by an animation that grows and shrinks them repeat-
edly. Both the ability to search in the slides directly and

to animate search results is based on the properties of
SVG (symbolic representation and manipulation on the
fly). Bookmarks are realized as a functionality that
allows for selecting arbitrary passages and storing them
for later viewing or exchanging them with other
students. Backtracking is implemented by storing the
play position whenever the user navigates to another
play position. Thus each navigation action can be
undone. In order to facilitate orientation at the stored
play positions, replay begins at their time index minus
three seconds. Structural elements are realized in two
ways, the simpler of which is next/previous buttons
that allow navigating to the next or previous slide or
animation step. A more sophisticated realization of struc-
tural elements is the interactive slide overview imple-
mented in virtPresenter (Mertens et al., 2006c). In the
overview, those parts of a slide that had been animated
during the original presentation when the lecture was
recorded can be clicked on with the mouse. The record-
ing then starts replay at the time index when the respec-
tive animation takes place during the lecture. To realize
these features, the slide documents are analyzed and
script code containing the respective time indices is
added automatically to the animated elements of a slide
(Mertens et al., 2007). The implementation of this step
was relatively easy due to the symbolic representation of
the slide elements in SVG. Footprints serve the purpose
of showing users which parts of a hyperdocument they
have already visited. In classic hypertext, this is done by
colouring visited and non-visited links differently.

Since web lectures are time based media, another
approach had to be found. In virtPresenter, coloured
parts of the timeline indicate that the corresponding
passages of the recording have already been watched by
the user. Multiple visits are indicated by deeper shad-
ings. The footprints are stored symbolically as pairs of
start and end time indices. They are drawn on the fly
when a lecture is watched. This has been realized by
the use of animated SVG rectangles. The different colour
shadings are created by overlapping semitransparent
rectangles.
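As an illustration of the footprint bookkeeping just described, the following TypeScript sketch (not the authors' code; the function and styling values are illustrative) stores each watched passage as a pair of start and end time indices and renders it as a semitransparent SVG rectangle on the timeline, so that passages visited several times automatically appear in a deeper shade.

interface Footprint { start: number; end: number; }  // time indices in seconds

const SVG_NS = "http://www.w3.org/2000/svg";

function drawFootprints(
  timeline: SVGSVGElement,  // SVG element holding the timeline
  footprints: Footprint[],  // watched passages, possibly overlapping
  duration: number,         // total length of the recording in seconds
  widthPx: number,          // pixel width of the timeline
  heightPx: number          // pixel height of the timeline
): void {
  for (const fp of footprints) {
    const rect = document.createElementNS(SVG_NS, "rect");
    // Map time indices to pixel coordinates on the timeline.
    rect.setAttribute("x", String((fp.start / duration) * widthPx));
    rect.setAttribute("width", String(((fp.end - fp.start) / duration) * widthPx));
    rect.setAttribute("y", "0");
    rect.setAttribute("height", String(heightPx));
    // A low fill opacity makes repeated visits show up as deeper shading,
    // because the semitransparent rectangles overlap.
    rect.setAttribute("fill", "steelblue");
    rect.setAttribute("fill-opacity", "0.3");
    timeline.appendChild(rect);
  }
}

// Example: a 90-minute lecture of which two passages were watched,
// one of them twice (0-300 s overlaps with 120-400 s):
// drawFootprints(timelineSvg, [{ start: 0, end: 300 }, { start: 120, end: 400 },
//   { start: 3000, end: 3300 }], 5400, 800, 12);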
This brief description shows that the properties of
SVG as a vector graphics format have been crucial for the
realization of the virtPresenter user interface. Especially
the implementation of footprints, bookmarks and full
text search has been facilitated immensely by SVG as a
vector graphics format.
2.2 Superior Picture Quality
Good picture quality of lecture slides is important even
for standard usage scenarios (Ziewer and Seidl, 2002).
However, it becomes even more important, when the lec-
ture slides are shown on a large screen as in the scenario
depicted schematically in Figure 2.
In this scenario, the lecture is replaced by a cinema-
like session in which the recording of the lecturer and the
slides are presented to the audience on two large screens.
This scenario has been carried out successfully at the
University of Osnabrück a number of times (Mertens

et al., 2005). Since the slides are shown on a large screen,
bad picture quality becomes even more obvious than dur-
ing replay on a standard computer display. At the
University of Osnabrück, the slides used had been in
SVG and had thus been presented in the same quality as
in the original lecture.
3. LESSONS LEARNED
The SVG-based version of the viewer interface was first
developed in 2003 and improved in various steps. The
main focus of the development was to implement the
hypermedia navigation concept for lecture recordings
described in section 2 and in more detail in (Mertens
et al., 2004).
At the time when development of the SVG based ver-
sion began, SVG seemed to be a promising choice for a
content format to be used in lecture recordings. SVG is
an XML based vector graphics format and was expected
to grow in importance. We had expected that SVG ren-
derers supporting the required subset of the SVG stan-
dard would soon become available on more platforms
than Windows and that their performance would increase
in order to rival that of Macromedia Flash (now Adobe
Flash). Things have, however, developed in a different
direction.
While all the features described in (Mertens et al.,
2004) could be realized with a combination of JavaScript,
SVG and Real Video, the technology used led to a number
of problems in everyday use. Loading and rendering
speed has proven to be a major problem when combining
SVG and Real technology. Table 1 compares slide load-

ing times of the SVG and the Flash based implementa-
tion (further described in sections 4 and 5). It also shows
loading times for an optimized version of the SVG slides
in which background graphics in the slides (logos) had
been deleted to speed up rendering. The testing environ-
ment was a Windows XP system with a 2.01 GHz AMD
Athlon 64 based processor and 1 GB RAM. The
tests were run locally on that system, so that no internet
connection could interfere. The test measures the elapsed
time until a slide object is loaded and fully available in the
main application.
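The following TypeScript sketch shows one way such a measurement could be taken; it is not the original test harness, and the element id "slide-container" as well as the function name are assumptions. The time stamp is taken before the slide object is inserted into the page and again when its load event fires.

function measureSlideLoad(slideUrl: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const container = document.getElementById("slide-container");  // assumed host element
    if (!container) {
      reject(new Error("slide container not found"));
      return;
    }
    const slide = document.createElement("object");
    slide.data = slideUrl;                          // e.g. an SVG or Flash slide file
    const t0 = Date.now();                          // start of the measurement
    slide.onload = () => resolve(Date.now() - t0);  // elapsed milliseconds
    slide.onerror = () => reject(new Error("slide failed to load"));
    container.appendChild(slide);                   // inserting the element starts loading
  });
}

// Averaging over the 13 converted PowerPoint slides used in the test:
// Promise.all(slideUrls.map(measureSlideLoad))
//   .then(times => console.log(times.reduce((a, b) => a + b, 0) / times.length));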
As some interactivity and animation features of SVG
that are only supported in the Adobe SVG Viewer (ASV)
had been used in the interface, replay was only possible
with the ASV for Microsoft’s Internet Explorer (IE). This
viewer plug-in does, however, exhibit low rendering
speeds and support will be discontinued in January 2008.
This fact is especially problematic when many slides have
to be shown at once, as is the case for overviews. Also,
switching from one slide to another happens with a
noticeable delay. The Real video player buffers data when
users navigate in the video. This buffering also slows
down the interface's response times noticeably.
Another problem with SVG was that the required
plug-in exists only for Microsoft's Internet Explorer.
Even though Adobe had implemented plug-in versions
for other browsers, only the one for IE supports the subset
of the SVG specification required for the implementa-
tion. This fact rules out platform independence for the
interface. Last but not least, the fact that plug-ins are
required for both Real Video and SVG poses an obstacle
for first time users of the interface.

Figure 1 VirtPresenter 1.0 user interface
The use of the SVG-based interface has been evaluated
in a number of courses. In these evaluations, the above
mentioned points were shown to have a considerable negative
impact on user acceptance. In 2006, three courses
were evaluated with a questionnaire developed for
the evaluation of e-Learning at the University of
Osnabrück. For abbreviation purposes, these courses are
referred to in the paper as courses A, B and C. Table 2
summarizes relevant details on the courses.
Figure 3 shows how the students judged download
times of the recordings. No actual download was offered.
The term “download times” thus refers to loading
and rendering times of the viewer interface. By and large,
the numbers in the figure do not seem too critical at first
sight. In practice, however, the interface takes considerably
longer to load than other material found on the course web
site. Also, the results show that while the loading times
have been acceptable for most students, they have not
been acceptable for all students.
Figure 4 shows how many students reported problems
using virtPresenter. The problem descriptions were
entered as free text answers in the questionnaires. In
course A, no student reported a problem. This might be
due to the fact that students were given very detailed
instructions. Having a non-technical background, the

students have very likely followed these instructions
closely. The questionnaires have also shown that all stu-
dents in course A used IE. In the other courses, the ques-
tionnaires have shown that some students did not use IE
(even though they had been instructed that using another
web browser would cause problems with the interface). In
contrast to course A, courses B and C were attended
by a number of students with technical backgrounds. The
questionnaires led to the assumption that some of these
students, being used to solving problems by trial and error,
tried to use the interface with browsers other than
IE, disregarding the information that it would not work
on these browsers. Seemingly unaware of the fact that the
interface was not supported under these settings, the students
reported the system behaviour as faults. From one
problem description it even became clear that the student
had not installed any SVG viewer.
In order to counter the above described effects, a number
of improvements were devised for the SVG based
version of the interface. For example, a nearly equivalent
solution was built with QuickTime video instead of Real video,
which also uses SVG for the slide representation together with a
Flash 6 based thumbnail overview component for faster
slide loading and interface response. This approach of
mixing technologies did not solve the problems either. The
reason was that the users had to install another plug-in,
QuickTime instead of Real, as well as the Flash plug-in.
Table 1 Slide loading with SVG and Flash

                                        SVG             SVG optimized   Flash
Average slide loading time (ms)         164*            120*            67
Average slide loading time with
video (ms)                              430** (Real)    243** (Real)    81 (Flash video)
Average slide size (KB)                 54              25              28

Test set: 13 different converted PowerPoint slides.
Test system: Windows XP, AMD Athlon 64 processor, 2.01 GHz, 1 GB RAM.
*Outliers: 520, 635. **Outliers: 7300, 6349, 2280, 4300.
Figure 2 Lecture slides on large screens
Moreover the reaction time of the interface could not be
improved by this approach.
As a preliminary workaround, plug-in and browser
checks were added to the original version. These
checks alert users when they try to use the interface with
the wrong software settings and have thus reduced the number
of bug reports caused by such misconfigurations. Also, a number
of enhancements were added to avoid unnecessary loading of
slides when slide changes happen at a high frequency.
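A minimal sketch (not the authors' code) of the kind of browser and plug-in check described above: the user is warned when the interface is opened in a browser other than Internet Explorer or when no SVG viewer control can be found. The ActiveX ProgID used below is an assumption for illustration only.

declare const ActiveXObject: new (progId: string) => unknown;  // only exists in Internet Explorer

function checkViewerEnvironment(): string[] {
  const problems: string[] = [];

  // The SVG based interface is only supported in Internet Explorer.
  if (navigator.userAgent.indexOf("MSIE") === -1) {
    problems.push("Please use Microsoft Internet Explorer to view this recording.");
  }

  // Probe for an SVG viewer ActiveX control (ProgID is illustrative).
  try {
    new ActiveXObject("Adobe.SVGCtl");
  } catch {
    problems.push("No SVG viewer plug-in was found. Please install the Adobe SVG Viewer.");
  }

  return problems;
}

// Usage: show a clear warning instead of a cryptic failure later on.
// const problems = checkViewerEnvironment();
// if (problems.length > 0) { alert(problems.join("\n")); }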
These approaches have, however, been limited by the
technology setting in which they had been employed. In
order to overcome these problems, we have turned to
Adobe Flex 2 in combination with the Open Source Red5
streaming server backend as described in section 5.
4. TECHNOLOGY REVIEW: FLASH VS. SVG
Strictly speaking, the new interface does not yet reach the
functional range of the old virtPresenter interface described in
(Mertens et al., 2006a; Mertens, 2007).
This is mainly due to the fact that the new version does
not yet feature an automatically generated thumbnail
slide overview, which is crucial to a number of functionalities
implemented in the SVG based version (Mertens
et al., 2004, 2006d). The thumbnail
overview is used both to visualize the connection of
navigation actions to the structure of a talk (Mertens
et al., 2006d) and to allow structure based navigation on
the level of animations within a slide. The latter is real-
ized by clickable slide elements that allow for direct nav-
igation to the replay position when the corresponding
slide element first appeared on screen during the record-
ed lecture (Mertens et al., 2004).
However, the reimplementation was necessary due to
frequent user problems with unsupported computer plat-
forms, wrong browsers or browser settings or missing
plug-ins. The underlying shared infrastructure (Mertens
et al., 2007) was enhanced to export, besides different
podcast formats, Flash content (Flash video and Flash
slides) (Ketterl et al., 2006b, 2007a). Adobe's Presenter
(formerly named Breeze) is now also a part of the automatic
lecture recording production chain. This software
component enables a fast PowerPoint to Flash conversion
that could be fully automated as well. It
was selected for this new process because
it is reliable and now even affordable for a smaller
university project.
university project. Today there are some open source or
commercial PowerPoint to Flash export systems besides
the Adobe product on the market. However, Adobe
Presenter currently seems to be the only system that
fits into our automated production chain. The other
systems could not be integrated in the automatic
production chain as they could not be started from other
Figure 4 User problems
Table 2 Course details

Course A: Fundamentals of Biblical Theology, 25 students. The lecture took place
as usual, all students could attend, and the recordings were provided as an add-on.
Course B: Internet Technologies, 27 students. The lecture took place at one
university and was transmitted to another one. Recordings were provided as an
add-on. A more detailed description of the scenario can be found in (Hoppe et al., 2007).
Course C: Managing Innovation and Projects, 19 students. Same setting as course B.
Figure 3 Lecture recordings download times
programmes. A problem with Adobe Presenter is
that the current version exports only
Flash 6 slides. The communication
between old Flash objects and new Flash 9 objects is not
ideal at the moment. Handling slides based on
different old Flash versions in a Flash
9 application, for example, is difficult. A prototype version which also features
slide based navigation is depicted in Figure 5 on the left
hand side.
Nevertheless, the time for post-processing (video and
slide conversion, slide text analysis and building all
the required software files for the interface) could be
reduced from previously about three hours down to only
about one hour for a 1.5 hour lecture. Of particular
importance here is that the Flash video conversion is
much faster than our previous Real video conversion. Our
initial recording format is still MPEG-2 because
this video format is of good quality and can
be converted into many different video/audio formats during
post-processing.
Figure 5 (right) depicts the revised and newly imple-
mented Flex 2 based web interface. Besides the objective
of using it on any computer platform without
adjustments, the aim was that people without a technical
background could use the interface as easily as internet
experts. On the right hand side of Figure 5 one can find
an area where users can choose from a list of recorded lec-
tures or search text in the recordings. Figure 6 shows this
lecture list (section a) and search results (section b) in a
more detailed view. The lecture list is updated via an
RSS notification mechanism. Our positive
experiences with Apple's iTunes, its popular Music
Store and its podcast subscription facility were an inspiration (Ketterl et al.,
2006a, b). The main reason why we do not use Apple's
iTunes (or other podcatcher software) and podcast
technology as the main distribution facility is that the navigation
possibilities in podcasts are limited compared to
the navigation options in the virtPresenter system.
the navigation options in the virtPresenter system.
Further inquiries about navigation in lecture podcasts
and how lecture podcasts are being used in contrast to the
normal lecture recordings are ongoing. Several examina-
tion results with student users and external users are
described in (Schulze et al., 2007) for virtPresenter and
(Hürst and Welte, 2007a) for a system used at the
University of Freiburg. In the revised virtPresenter sys-
tem, users can subscribe to lecture recordings using our
internal university learn management system Stud. IP
(www.studip.de). The virtPresenter interface gets updated
and shows the lecture recordings as soon as they are avail-
able. Aside from that, external users can subscribe to the
recordings (like subscribing to a normal podcast with a
podcatcher software like Apple’s iTunes) and can view
recordings for example that are open for public viewing.
This lecture recording offer is presented over a public
website. In short, this means that students as well as
external viewers use the same interface for different
recordings. They do not need to switch between applica-
tions and there is no need to follow additional links in
other browser windows. The interface can also be used if
a link from our lecture website or the LMS points to a
specific lecture or a specific time index in a recording.
This is done by interpreting assigned url parameters. The
feature is a further extension of a functionality imple-
mented for the SVG based version and described in fur-
ther detail in (Mertens et al., 2005).
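A minimal sketch of how such deep links could be interpreted; the parameter names "lecture" and "t" as well as the player calls are assumptions for illustration, not the actual virtPresenter URL scheme.

interface DeepLink { lectureId: string | null; seconds: number; }

function parseDeepLink(href: string): DeepLink {
  const params = new URL(href).searchParams;
  const t = Number(params.get("t"));
  return {
    lectureId: params.get("lecture"),                 // which recording to open
    seconds: Number.isFinite(t) && t > 0 ? t : 0      // default to the beginning
  };
}

// Usage: open the recording and jump to the requested time index.
// const link = parseDeepLink(window.location.href);   // e.g. viewer.html?lecture=infA&t=1820
// if (link.lectureId) { player.load(link.lectureId); player.seek(link.seconds); }
// (player.load and player.seek stand in for the actual player API.)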
Section b in Figure 6 also depicts a possibility to search

in the recordings. Users can search not only in one web
lecture but in all recordings they have subscribed to. The
search results are presented in a hierarchical tree overview
similar to Adobe’s Acrobat. The results can be selected
and are linked directly to the corresponding lecture
recording section.
Due to the changeover to Flex 2 technology, users can
navigate fluently in the recordings with a new time
scrubber component (see Figure 7). In the SVG based
version, visible scrolling in the sense of (Hürst and
Müller, 1999) was only possible with the slides used in
the recording; in the Flex 2 based version, it is possible
for both slides and video. Presently we highlight slide
borders in the timeline and show the lecture slide title
directly above the respective area of the timeline. The sections
which have already been viewed by the user are colour-coded.
When a lecturer uses the mouse cursor during
the presentation, this data is also logged by the
underlying recording system and can be presented
in the user interface as well.
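A minimal TypeScript sketch (not the Flex implementation) of the timeline logic described above: a position on the timeline is mapped to a time index, and the slide whose start time precedes that index determines the title shown above the timeline. The SlideMarker data would come from the recorded slide change times.

interface SlideMarker { startSeconds: number; title: string; }

function positionToSeconds(xPx: number, widthPx: number, duration: number): number {
  // Clamp to the recording length so dragging past the ends stays valid.
  return Math.max(0, Math.min(duration, (xPx / widthPx) * duration));
}

function slideTitleAt(seconds: number, slides: SlideMarker[]): string {
  // Slides are assumed to be sorted by start time; the last slide that has
  // already started is the one visible at this time index.
  let title = "";
  for (const slide of slides) {
    if (slide.startSeconds <= seconds) { title = slide.title; } else { break; }
  }
  return title;
}

// Usage while scrubbing: show the title of the slide under the mouse cursor.
// const seconds = positionToSeconds(event.offsetX, timelineWidth, lectureDuration);
// titleLabel.textContent = slideTitleAt(seconds, slideMarkers);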
The Flex based interface responds considerably quicker
than the old one (see Table 1). Delays resulting from slide
loading, jumps to other sections or disturbing video
buffering that we had in the old Real video and
SVG based version are no longer noticeable. Even a
complete reload of the system due to a browser refresh is
quick. The interface was tested on Windows, Linux and
Mac OS X platforms, all with the Flash 9 player
plug-in. The results described were alike on all platforms.
5. VIRTPRESENTER 2.0: HOW FLASH 9 MADE THINGS FAST
For the new implementation of the lecture recording
system we used Adobe's Flex 2 technology (introduced
in June 2006) for the user interface and
for user interaction. Flex 2 is based entirely on
ActionScript 3, which was introduced as a revised and
extended programming language as part of Adobe’s new
Flash 9 player. Flex applications are deployed as com-
piled byte code that is executed within the Flash player
runtime. The core of Flex is the developer-centric
Flex framework, a library of ActionScript 3 objects that
provide a foundation for building rich internet applica-
tions. Writing applications with Flex is similar to devel-
oping in .NET or Java (Kazoun and Lott, 2007). Also,
Flex provides a wealth of useful components so that
developers do not have to build everything from scratch.
Besides the comfortable developer framework, what matters
in our scenario is that neither a special browser version
nor a combination of different plug-ins has to be
installed on the users' computers (as was needed in the
SVG based implementation). The user only needs the
Flash player plug-in for viewing the web lecture record-
ings. The current plug-in version is Flash 9, which is
available for browsers on Windows (IE, Firefox and
Opera), Apple (Safari, Firefox) and Linux (Firefox) as
well. Normally this plug-in can be installed without dif-
ficulties or special computer knowledge. Besides, this
software component is very popular and widespread

nowadays (Téllez, 2007). That means that no special
browser adjustments or compatibility checks are
required. The same version will work on different com-
puter platforms as a cross browser solution. The plug-in
base for ActionScript 3 is a newly implemented virtual
machine called ActionScript Virtual Machine 2 (AVM2)
that converts byte code into native machine code. It is
more like a Java Virtual Machine (Java VM) or the .NET
Common Language Runtime (CLR) than a browser
script engine. The most important advantage is (and this
is a main reason why we are using Flash 9) that the new
runtime environment is faster than previous versions and
uses much less memory on the computer (Adobe,
2007). We could confirm this assertion in our daily work
with the new Flex 2 framework. Student users report
that they like how fast the new interface responds and
reacts to user interaction. Further user acceptance/prob-
lem surveys are planned for February 2008.
In order to respond fast, a further component is important.
As mentioned before, a main problem was the
video buffering of the Real player in the interface. A dedicated
and reliable video server is also required. Like most
universities we have a fairly good server infrastructure
backend, on which we could have used Adobe's recommended
but expensive Flash Media Server 2 for working
with recorded lecture videos. Instead of this expensive
solution, we have used an open source Flash streaming
server implemented in Java for a couple of months now,
called Red5 (Red5, 2007). The adoption was an
experiment, because this open source server deployment

was not really stress tested, barely documented and only
available in version state 0.6 (currently version 0.6.3 is
available). The server has worked very stably, even during the
critical exam time at the end of the term.
Our productive streaming system used during that
time was a Windows XP system with a 2.8 GHz Intel
dual-core Xeon processor and 4 GB of RAM. This
video server system is more than adequate, with sufficient
reserves in case of user request peaks. At present there is
no need to use Adobe’s expensive Flash Media Server 2
solution in our production environment.
Figure 7 Timeline with slide border visualization and
slide title overview
Figure 5 VirtPresenter 2 Flex technology based interface
Figure 6 RSS updated lecture overview
with lecture search
6. BEHIND THE SCENES: ADMINISTRATION AND WORKFLOWS
Lecture recording with virtPresenter makes use of a fully
automated recording and an extended production chain
described in (Ketterl et al., 2007a). While this process is
fully automated, a number of administration tasks still
remained. Currently we manage and generate eighteen
web lecture recordings with additional podcasts (Ketterl
et al., 2006a) per week, from different university courses in
different rooms, plus some additional recordings for
special occasions such as conferences and workshops.
This number increases steadily. The lecture
recording system is tightly connected to the learning
management system Stud.IP used at the University of
Osnabrück. We have also defined more general interfaces
that make metadata like the name of the course, the name
of the lecturer and data for full text search available to
other systems like content portals or search engines.
These interfaces also allow for authentication handling
by the other system. Thus users do not have to log in
separately in the lecture viewer since they are authenti-
cated externally, e.g. by the portal.
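A minimal TypeScript sketch of the kind of metadata record such general interfaces could expose to content portals or search engines; the field names and the endpoint URL are illustrative assumptions rather than the actual virtPresenter interface definition.

interface RecordingMetadata {
  courseName: string;       // name of the course
  lecturer: string;         // name of the lecturer
  recordedAt: string;       // ISO date of the recording
  durationSeconds: number;
  slideTexts: string[];     // extracted slide text for full text search
  publicAccess: boolean;    // whether the episode is open for public viewing
}

// A portal that has already authenticated the user could request the metadata
// of all recordings of a course, e.g. via a simple HTTP endpoint (URL assumed):
async function fetchCourseRecordings(courseId: string): Promise<RecordingMetadata[]> {
  const response = await fetch(
    `https://example.org/virtpresenter/api/courses/${encodeURIComponent(courseId)}/recordings`
  );
  return response.json() as Promise<RecordingMetadata[]>;
}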
Normally the recordings are assigned to the web page
of the course in the university LMS. Figure 8 shows what
this integration looks like in our university LMS Stud.IP.
Figure 8 Lecture Recordings in the Learn Management System
The recordings can additionally be tagged with further
metadata or can be stored in other database systems,
from where further platforms can use them as well. At present
we are working on a rights management system for the
recordings that will serve the purpose of defining whether
episodes are available for university members, publicly
(distribution over Apple's iTunes music store (Ketterl
et al., 2006a), for example), as part of a course exchange programme
with other universities or on a pay-per-view basis.
A recurring administration task at the end of a study
term is to take the web lecture recordings offline on a
DVD or a CD, either for data backup purposes or for
students and lecturers wishing to watch the lecture
recordings offline. The normal approach in our production
system was to copy the recorded video, the lecture
slides and the complete source code for the web
interface onto that offline medium. In addition to the fact
that it is not very convenient for users to start the recordings
by clicking a specific file link in the DVD file system,
we had the drawback that the complete (possibly
copyrighted) material is on that offline medium as well.
Over the internet, we had at least user authentication to
protect the content. A more attractive and promising way
to reduce administration effort and to keep the content
protected is to use Adobe’s new integrated runtime envi-
ronment called AIR (prior development name Apollo).
AIR stands for Adobe Integrated Runtime.
The environment is a new cross-platform desktop run-
time that allows web developers to use web technologies
to build and deploy Rich Internet Applications and web
applications to the desktop (Chambers et al., 2007).
Over the last few years, there has been an accelerating
trend of applications moving from the desktop to the
web browser. With the maturation of the Flash Player
runtime and Ajax type functionality it became possible
for developers to offer richer application experiences
without disturbing page refreshes. This means that the
Flex implementation of the web lecture system can be
installed offline on a Windows PC or on a Macintosh sys-
tem (a Linux version is promised by Adobe to appear by
the end of 2007) and it will behave like any other appli-
cation on the system. On Windows, for example, the
virtPresenter web lectures appear now offline in the start
menu and in the windows taskbar. As a drawback, users
have to install the AIR runtime on their system.
The adoption of this technology in general is still in
question. Why should users prefer a web-like application
on their normal desktop computers? In contrast to this
approach, there are other projects and ideas that focus
on the web as an operating system (Vahdat et al., 1996)
or on new alternative technologies, as described in the next
section.
In the literature one can find further examples for
using RIAs on the desktop or ideas for adopting this
technology (Chambers et al., 2007). In our lecture record-
ing production environment, AIR solves some of the
offline related problems. We can offer virtPresenter
recorded AIR versions for standard download in case of a
Red5 streaming server breakdown. Another prospect is
that users do not need to be online while watching the
lecture recordings since the AIR application could
include all required files. The offline application is
updated by re-reading the associated
RSS files whenever the computer is online, so that new data
(new lecture recordings) can be transferred to and updated
in the offline version.
For the simple lecture recording data backup mentioned
at the beginning of this section, AIR is not an option,
because the content is encapsulated in the
AIR application and it is problematic to disassemble it.
7. CONCLUSION AND FUTURE RESEARCH
During the last few years, Flash has evolved into an ideal
content format for web lectures. Especially the fact that
both slides and video can be replayed with one single
browser plug-in makes web lecture interfaces built upon
this technology easy to use for almost anyone. This paper

has demonstrated the feasibility of a Flex 2 based user
interface for web lectures and it has shown that this tech-
nology can be used to improve usability and ease the
administrative workload.
With AIR it is even possible to protect content in
offline versions of a web lecture. Given the fact that AIR
and AIR- or Flash-like approaches (Silverlight (Cohen,
2007), the JavaFX family, or Google Gears) are rumoured
to be supported by a number of mobile devices in the
near future, AIR could also open more perspectives for
interactive presentation of web lectures on mobile
devices. If AIR on mobile devices worked just like conventional
AIR applications, it would be possible to produce
learning content that can be used for normal websites
and for m-learning modules at the same time, that is,
without expensive device adjustments. Our lecture podcasts
(audio, video and enhanced podcast versions)
(Ketterl et al., 2006a, b) were a step forward in supporting
mobile users with fine-grained lecture recordings.
In combination with additional mobile self assess-
ments as developed for the system presented here (Ketterl
et al., 2007b) and other systems (Hürst et al. 2007a)
learning on the go becomes possible. Podcast technology
currently has a drawback for mobile learners:
mobile users cannot give feedback to the lecturer, for
example, due to technical limitations of devices and of
podcast technology. With full AIR support on mobile
devices, it is likely that these problems could be solved

easily as one AIR application could run on different plat-
forms (mobile, internet and desktop).
Another branch we are pursuing in the Flex based ver-
sion of the interface is implementing social navigation
functionalities that had previously been tested in the
SVG based version of the interface (Mertens et al.,
2006a). Flex 2 does, however, open new perspectives for
social navigation in lecture recordings. The reduced load-
ing times allow for editing and rearranging content on
the client side without having to change its server side
representation. It is also easier to embed the player in
other web sites. To prove this, some of our lecture record-
ings and the newly implemented Flex 2 based
virtPresenter interface have been integrated as an
application in the social community Facebook. An issue
that does still remain to be solved is how navigation can
be facilitated in re-arranged and re-structured content.
REFERENCES
Adobe Systems Incorporated (2007), “Flex2 technical overview:
technical whitepaper”, available at: www.adobe.com/products/
flex/whitepapers/pdfs/flex2wp_technicaloverview.pdf
(accessed December 2007).
Bieber, M. (2000), in Ralston, A., Reilly, E. and Hemmendinger,
D. (Eds), Hypertext. Encyclopaedia of Computer Science, 4th ed.,
Nature Publishing Group, pp. 799-805.
Chambers, M., Dixon, R. and Swartz, J. (2007), Apollo for
Adobe Flex Developers Pocket Guide, Adobe Developer
Library, O’Reilly, Media Inc., Sebastopol, CA.
Chambers, M., Dura, D. and Hoyt, K. (2007), Adobe Integrated
Runtime (AIR) for JavaScript Developers, Adobe Developer

Library, O’Reilly, Media Inc., Sebastopol, CA.
Cohen, B. (2007), “Silverlight technical articles. Silverlight
architecture overview: technical whitepaper”, Microsoft
Corporation, April, available at: http://msdn2. microsoft.com/
en-us/library/bb428859.aspx (accessed December 2007).
Hoppe, U., Klostermeier, F., Boll, S., Mertens, R. and Kleinefeld,
N. (2007), “Wirtschaftlichkeit von Geschäftsmodellen für
universitäre Lehrkooperationen – eine Fallstudie”, Zeitschrift
für E-Learning, Vol. 3, No. 2, pp. 29-40.
Hürst, W. and Müller, R. (1999), “A synchronization model
for recorded presentations and its relevance for information
retrieval”, 7th ACM International Conference on Multimedia,
Orlando, Florida, pp. 333-42.
Hürst, W. and Welte, M. (2007), “An evaluation of the mobile
usage of e-lectures podcasts”, Proceedings of the Mobility
Conference on Mobile Technology, Applications and Systems,
Singapore, September 2007.
Hürst, W., Jung, S. and Welte, M. (2007), “Effective learn-
quiz generation for handheld devices”, Proceedings of the 9th
Conference on Human Computer Interaction with Mobile Devices
and Services (MobileHCI 2007), Singapore.
Kazoun, C. and Lott, J. (2007), Programming Flex 2, Adobe
Developer Library, O’Reilly, Media Inc., Sebastopol, CA.
Ketterl, M., Mertens, R. and Morisse, K. (2006a), “Alternative
content distribution channels for mobile devices”,
Microlearning International Conference on Micromedia
& eLearning 2.0: Getting the Big Picture, Innsbruck, 8-9 June

2006, pp. 119-30.
Ketterl, M., Mertens, R. and Vornberger, O. (2007a), “Vector
graphics for web lectures: comparing Adobe Flash 9
and SVG”, Workshop on Multimedia Technologies for
E-Learning (MTEL), IEEE International Symposium on
Multimedia 2007, Taichung, Taiwan, 10-12 December 2007,
pp. 389-95.
Ketterl, M., Heinrich, T., Mertens, R. and Morisse, K. (2007b),
“Enhanced content utilisation: combined re-use of multi-type
e-learning content on mobile devices”, IEEE Multidisciplinary
Engineering Education Magazine, Vol. 2 No. 2, pp. 61-4.
Ketterl, M., Mertens, R., Morisse, K. and Vornberger, O.
(2006b), “Studying with mobile devices: workflow and
tools for automatic content distribution”, World Conference on
Educational Multimedia. Hypermedia & Telecommunications
EDMedia 2006, Orlando, FL, June 2006, pp. 2082-8.
Lauer, T. and Ottmann, T. (2002), “Means and methods in auto-
matic courseware production: experience and technical chal-
lenges”, Proceedings of the World Conference on E-Learning in Corp.,
Govt., Health. & Higher Education, E-Learn 2002, Montreal,
Quebec, Canada, 15-19 October 2002, pp. 553-60.
Mertens, R. (2007), “Hypermediale Navigation in
Vorlesungsaufzeichnungen: Nutzung und automatische
Produktion hypermedial navigierbarer Aufzeichnungen
von Lehrveranstaltungen”, PhD thesis, Universität
Osnabrück, Osnabrück.
Mertens, R., Farzan, R. and Brusilovsky, P. (2006a), “Social
navigation in web lectures”, ACM Hypertext 2006, Odense,
Denmark, 23-25 August 2006, pp. 41-4.
Mertens, R., Friedland, G. and Krüger, M. (2006b), “To see or

not to see? Layout constraints, the split attention problem
and their implications for the design of web lecture inter-
faces”, World Conference on E-Learning, in Corporate,
Government, Healthcare & Higher Education, E-Learn 2006,
Honolulu, HI, 13-17 October 2006, pp. 2937-43.
Mertens, R., Ketterl, M. and Vornberger, O. (2006c),
“Interactive content overviews for lecture recordings”,
IEEE ISM 2006 Workshop on Multimedia Technologies for
E-Learning (MTEL), San Diego, California, USA, 11-13
December 2006, pp. 933-7.
Mertens, R., Ketterl, M. and Vornberger, O. (2007),
“The virtPresenter lecture recording system: automated
production of web lectures with interactive content
overviews”, International Journal of Interactive Technology and
Smart Education (ITSE), Vol. 4 No. 1, pp. 55-66.
Mertens, R., Brusilovsky, P., Ishchenko, S. and Vornberger, O.
(2006d), “Time and structure based navigation in web lec-
tures: bridging a dual media gap”, World Conference on
E-Learning, in Corporate, Government, Healthcare & Higher
Education, E-Learn 2006, Honolulu, HI, USA, 13-17 October
2006, pp. 2929-36.
Mertens, R., Ickerott, I., Witte, Th. and Vornberger, O.
(2005), “Entwicklung einer virtuellen Lernumgebung für
eine Großveranstaltung im Grundstudium”, Proceedings of
the Workshop on elearning 2005, HTWK Leipzig, 11-12 July
2005, pp. 197-210.
Mertens, R., Schneider, H., Müller, O. and Vornberger, O.
(2004), “Hypermedia navigation concepts for lecture
recordings”, World Conference on E-Learning in Corporate,
Government, Healthcare & Higher Education, E-Learn 2004,

Washington DC, November 2004, pp. 2480-7.
Red5 Open Source Streaming Server (2007), available at:
(accessed December 2007).
Schulze, L., Ketterl, M., Gruber, C. and Hamborg, K.C.
(2007), “Gibt es mobiles Lernen mit Podcasts? Wie
Vorlesungsaufzeichnungen genutzt werden”, 5. e-Learning
Fachtagung Informatik (DeLFI), Siegen, Germany, September
2007, pp. 233-44.
Téllez, A.G. (2007), “Authoring multimedia learning material
using open standards and free software”, IEEE International
Symposium on Multimedia 2007 Workshop on Multimedia
Technologies for E-Learning (MTEL), Taichung, Taiwan,
10-12 December 2007, pp. 383-9.
Vahdat, A., Dahlin, M. and Anderson, T. (1996), “Turning the
web into a computer”, Technical report, University of
California, Berkeley, CA.
Ziewer, P. and Seidl, H. (2002), “Transparent teleteaching”,
19th Annual Conference of the Australasian Society for
Computers in Learning in Tertiary Education (ASCILITE),
Auckland, New Zealand, December 2002, Vol. 2, pp.749-58.
Authoring multimedia learning material
using open standards and free software
Alberto González Téllez
Departamento de Informática de Sistemas y Computadores, Valencia, Spain
Abstract
Purpose – The purpose of this paper is to describe an approach to authoring synchronized multimedia presentations using open standards and free software.
Design/methodology/approach – The proposal is based on SMIL as the composition language. In particular, the paper reuses and customizes the SMIL template used by INRIA for their technical presentations. It also proposes a set of free tools to produce presentation content and design, focusing on RealPlayer as the delivery client. The integration of multimedia compositions developed with the proposed technique into the authors' e-learning platform is also presented.
Findings – Technological support for learning and teaching has become widespread thanks to the ubiquity of computers and the internet. In particular, e-learning platforms permit the any-time, any-place distribution of interactive multimedia learning materials. Commercial tools are available to author this kind of content, usually based on proprietary formats. This option has drawbacks such as license costs and dependency on a software company. Using open data standards and free software is an alternative without these inconveniences, but the available authoring tools are commonly less productive. This shortcoming is certainly important for non-technical authors, and it could be addressed by open source collaboration.
Originality/value – The paper presents an approach to authoring multimedia learning material using open standards and free software.
Keywords: Multimedia, E-learning, Teaching aids, Computer software
Paper type: Research paper
Interactive Technology and Smart Education (2007) 192–199
© Emerald Group Publishing Limited
1. INTRODUCTION
Learning and teaching materials in digital format are commonly used in universities, owing to the availability of computers in classrooms and to the added capabilities that computer-based delivery offers compared with the classic blackboard-only method. In fact, the classroom has been extended by ubiquitous e-learning platforms (Sakai, Moodle, WebCT, etc.) that impose the use of digital formats for learning content. Computer authoring tools make it possible to create dynamic presentations with animation effects, audio and video clips that make knowledge transfer more effective. For instance, the most commonly used presentation editor, PowerPoint, can produce narrated presentations by adding a speaker voice track to the slides. Tools like eChalk (Jeschke, 2006) go a step further, allowing all live activity on a pen-based input device or electronic whiteboard, including the lecturer's voice, to be recorded and the recorded content to be delivered to Java-aware web clients.
Common presentations authored with office suite tools are intended to be used locally on the computer where they are stored. Web format is supported as an export option, but the format obtained is usually not well suited for the Internet (e.g. slides converted to bitmaps, lack of streaming support). HTML extended with Flash and JavaScript is more suitable for web delivery, and this is nowadays the general choice for web content authors. The main reason is that very good commercial authoring tools are available and almost all clients and platforms support these formats. In this context Flash is the part that adds multimedia support and, compared with HTML and JavaScript, it is a proprietary format. In spite of being the de facto standard for multimedia on the web (e.g. YouTube is based on Flash), it has the shortcoming of tying authors to Adobe and its commercial decisions. Portability is not a problem because the Flash plug-in is available on Windows, Linux and MacOSX.
Open format alternatives to Flash are SVG and SMIL, two XML-compliant languages standardized by the World Wide Web Consortium (W3C). XML is a W3C effort to promote the definition and adoption of open, application-independent data formats. SVG (Scalable Vector Graphics) is intended for designing static and animated vector graphics. SMIL (Synchronized Multimedia Integration Language) (W3C SMIL site) makes it possible to combine and synchronize several independent media in a presentation. The presence of SVG and SMIL on the web is nowadays clearly surpassed by Flash, but successful open source initiatives such as Firefox and Helix could change this scenario in the future.
There are two other alternatives for making the web multimedia-capable: ActiveX controls and Java applets. ActiveX is a Windows-only technology and owes its success to the current dominance of Windows clients on the Internet. Java applets are supported on all Java-aware platforms and are very convenient for implementing small web-compatible interactive applications. We use applets to enrich our learning documents with interactive simulators (González, 2003) and, as we will see later, to include multimedia compositions in our e-learning platform. The availability of the Java plug-in for all common web clients and operating systems makes this technology a good development platform for e-learning environments (Jeschke, 2006).
In this work we propose a technique for developing multimedia content for the Internet based on open standards, particularly SMIL, which has been used for many years in the context of lecture recording (INRIA site; Yang et al., 2001; Ma et al., 2003; Joukov and Chiueh, 2003; Hunter and Little, 2001). Our proposal is comparable to the one appearing in Yang et al. (2001), but it is simpler and more concrete in the sense that all the required tools and procedures are presented and available. It includes a set of free and, in most cases, open source authoring tools. A main goal of the proposal is multiplatform support (particularly Windows, Linux and MacOSX) in both the delivery and production processes.
2. CONTENT PRODUCTION SCHEME
We have been working in recent years on the use of open, XML-compliant formats to produce teaching content. As a result we have developed an authoring environment to produce and manage content based on Docbook (González, 2006; González, 2007). Until now content media were limited to text and static graphics, focusing on paper-format delivery.
In the academic year 2006-2007 our university started PoliformaT, an e-learning platform based on Sakai (Mengod, 2006). This has opened some working directions for us. One of them is based on the fact that Sakai has chosen the IMS formats for learning content (IMS site), particularly the IMS content package and IMS QTI languages. Our previous decision in favour of XML has proved wise because Docbook content can be automatically translated to IMS format by means of XSLT. This is quite feasible because Docbook is well structured and format independent. A second working line is based on the possibility of delivering more dynamic content (multimedia compositions) containing animations, video, audio and user interaction.
Our learning content has text as its backbone; we are classic in this respect. Text combined with static graphics is delivered in paper format (PDF) and in web format (HTML). The web format is extended by means of multimedia compositions based on SMIL; this extension is the topic of this work. Our multimedia compositions are classified in the following sequence of increasing structural complexity:
A. Static image with voice narration.
B. Computer animation or natural video with synchronized voice narration.
C. Multiple media synchronized with voice narration or lecturer video.
2.1 Media Formats and Media Player
An important decision to make when dealing with multimedia on the Internet is the selection of the target client. Web clients directly support only HTML, JavaScript and bitmap graphics (JPEG, PNG and GIF). Other content, such as vector graphics, audio and video, requires specific plug-ins (Rogge, 2004) and therefore specific formats. The authoring of this kind of content is thus strongly conditioned by the target client. Some of the most common multimedia clients are:
• Windows Media Player (Microsoft).
• QuickTime (Apple).
• RealPlayer (Realnetworks, Helix).
• Flash (Adobe).
• Mplayer (open source).
• VLC (open source).
Excluding Windows Media Player, which is only available on Windows, all these players are multiplatform. Flash is without doubt the one that wins in terms of the amount of content published on the web. RealPlayer has been overshadowed by Windows Media Player but it is still alive (release 11.0 was delivered in November 2007) and it has the interesting feature of a linked open source initiative named Helix (Helix site). Helix was started by Realnetworks and includes several open source projects, including several players, the Helix server and streaming formats. RealPlayer supports SMIL 2.0, briefly described in the next section, which allows composition structures and user interaction capabilities that surpass those offered by proprietary formats (Pihkala, 2006). In Bulterman (2003) SMIL is proposed for encoding peer-level annotations that allow the dynamic expansion of multimedia presentations. The drawback is that SMIL players tend to have limitations in the features they support, and even errors (Eidenberger, 2003). And, last but not least, there is no media content standardization for SMIL.
After balancing pros and cons we have chosen SMIL as the language for creating our multimedia learning/teaching material. The purpose of SMIL is to define the spatial and temporal integration of several media in a multimedia composition and to establish the user interaction with the composition. The previous considerations indicate that it is advisable to choose a target client from among the available SMIL-aware clients. This precisely defines the media formats to use and the portion of the SMIL specification that is properly supported and therefore reliable. RealPlayer is our choice because it is available on Windows, Linux and MacOSX and it supports an extensive subset of the SMIL 2.0 specification. In spite of being a proprietary player it has the interesting feature, mentioned previously, of being related to the open source project Helix. RealPlayer supports, among others, the following formats:
• Text: Plain text and Realtext.
• Images: JPEG.
• Audio: Realaudio.
• Animations: Realvideo.
• Natural video: Realvideo.
Realaudio and Realvideo are specially designed for streaming delivery and are the most convenient audio and video formats for obtaining good synchronization results in SMIL constructs played by RealPlayer.
3. SMIL
SMIL (Synchronized Multimedia Integration Language; W3C SMIL site) is the XML-based W3C standard intended to define the synchronized integration of text, graphics, audio and video in multimedia presentations. SMIL makes it possible to define the spatial and temporal composition of several media, as well as the interaction among the media inside the presentation and between the presentation and the user. Because it is XML compliant, only a plain text editor is required to create SMIL documents by hand, and it is also straightforward to generate them automatically.
The structure of a SMIL document is similar to that of HTML: there is a root element <smil> with two child elements, <head> and <body>. The <head> element is the document header and contains several kinds of metadata elements. The most important one is <layout>, which defines the spatial regions of the presentation, as shown for instance in Figure 1. The <root-layout> element defines the features (background color, size, etc.) of the main presentation panel. The <layout> element also includes the definition of the spatial regions that will contain the presentation media. Every region is defined by a <region> element that sets its location and size; it also assigns a unique identifier (id attribute) to the region so that it can be referenced from the content part of the document. The header section can also include descriptive metadata in <meta> elements that permit the inclusion of the document in an automatically managed content repository.
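To illustrate the header structure just described, a minimal sketch of a <head> section is shown below; the region names, sizes and colors are invented for this example and are not taken from any particular presentation:

  <smil xmlns="http://www.w3.org/2001/SMIL20/Language">
    <head>
      <meta name="title" content="Example lecture clip"/>
      <layout>
        <root-layout width="800" height="600" backgroundColor="white"/>
        <region id="title"    left="0"   top="0"   width="800" height="60"/>
        <region id="slides"   left="0"   top="60"  width="600" height="540"/>
        <region id="links"    left="600" top="60"  width="200" height="400"/>
        <region id="lecturer" left="600" top="460" width="200" height="140"/>
      </layout>
    </head>
    <!-- the <body> element, described below, follows here -->
  </smil>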
After the header comes the <body> element, which includes the references to the media shown in the presentation and their spatial and temporal locations. Every medium is included by means of a media element such as <text>, <img>, <audio> or <video>, using the src attribute to specify the location of the media file. The spatial location is defined by means of the region attribute, which is set to a region identifier defined by the id attribute of a <region> element.
Temporal behavior is defined by means of a nested composition of <seq> and <par> elements, which specify sequential and parallel playback, respectively. Inside a <seq> or <par> element we can have <switch> elements intended to select, from a collection of media, the ones that comply with some condition (e.g. the presentation language). Every media element has attributes that establish its timing behavior: begin for the start time, end for the end instant and dur for the playing duration.
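Continuing the sketch above (file names and durations are again invented for illustration), a body that plays a slide sequence in parallel with an audio narration could be written as follows:

  <body>
    <par>
      <!-- narration that runs for the whole composition -->
      <audio src="narration.ra"/>
      <!-- slides shown one after another in the "slides" region -->
      <seq>
        <img src="slide1.jpg" region="slides" dur="40s"/>
        <img src="slide2.jpg" region="slides" dur="55s"/>
        <video src="demo.rm" region="slides" dur="30s"/>
      </seq>
    </par>
  </body>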
SMIL also provides links, implemented with the <a> element, that allow user interaction with the presentation. Links can point to content locations (as in HTML) and to temporal locations. Temporal link destinations are defined by means of <area> elements included in a temporal media element (e.g. locations in a video clip). In Section 4.2 we describe how this is done in our compositions.
3.1 INRIA’s SMIL Template
The French research institution INRIA (Institut National de Recherche en Informatique et en Automatique) has chosen SMIL as the format for their
technical presentations (INRIA site). Our interest in SMIL began after noticing how well these presentations are reproduced with RealPlayer on Windows, Linux and MacOSX. It has been possible to analyze these presentations and to reuse their template because SMIL is an open format and INRIA has not defined privacy restrictions.
INRIA presentations are about one hour long and are made using two designs: the first has a root presentation that links to partial presentations several minutes long; the second includes the whole presentation in one SMIL document. Our presentations are conceived as small pieces in a text-backboned lecture, so their length will be at most 5 or 10 minutes; the second template is therefore more adequate. The spatial design of the selected SMIL template defines the regions shown in Figure 2:
• Title: includes the presentation title.
• Slides: shows the presentation slides (e.g. JPEG images); it can include sub-regions to show different types of media inside a slide.
• Temporal link index: includes temporal links to time locations in the presentation.
• Lecturer: shows the narrative lecturer video.
• Logo: shows the institution logo.
The presentation timing structure has a root <par> element that contains the slide sequence, the menu links and the narrative lecturer video. The sequence of slides is built by putting the elements (<img> or <video>) corresponding to each slide inside a <seq> element. If a slide is made up of different media, it corresponds to a <par> element that includes all of them. The timing of the slide sequence is controlled by means of the dur attribute of every slide element. The narrative lecturer video covers the whole presentation and is encoded in Realvideo. The menu links point to <area> locations in the lecturer video that are also synchronized with the timing defined in the slide sequence. A more detailed analysis can be performed by looking at the SMIL source of a presentation; this can be done by clicking on the clip source entry in the floating menu when a presentation is played with RealPlayer.
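A simplified sketch of this timing structure is given below. The element nesting follows the description above, but the clip names, anchor identifiers and time values are invented here and do not reproduce an actual INRIA presentation:

  <par>
    <!-- narrative lecturer video with temporal anchors -->
    <video src="lecturer.rm" region="lecturer">
      <area id="part1" begin="0s"/>
      <area id="part2" begin="95s"/>
    </video>
    <!-- temporal link index pointing to the anchors above -->
    <a href="#part1"><text src="menu1.txt" region="links"/></a>
    <a href="#part2"><text src="menu2.txt" region="links"/></a>
    <!-- slide sequence synchronized with the narration -->
    <seq>
      <img src="slide1.jpg" region="slides" dur="45s"/>
      <img src="slide2.jpg" region="slides" dur="50s"/>
      <img src="slide3.jpg" region="slides" dur="60s"/>
    </seq>
  </par>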
We have found the INRIA technical presentations a good example of SMIL's capability to create multimedia presentations. Our customization of their design template to elaborate our multimedia material is described in Section 4.1.
4. AUTHORING TOOLS
After establishing the technology to use, we have to select a set of good enough authoring tools to produce the media we are going to include in our multimedia presentations. We prefer free, open source and multiplatform tools. In Table 1 we propose a set of free tools available on Windows and Linux. Real Producer Basic is a free product from Realnetworks that can capture audio and video and convert them to Real formats. The converted streams cannot be edited inside Real Producer Basic; therefore media editing, if required, should be performed before conversion.
Impress is the OpenOffice presentation editor and we have found it good enough to produce teaching content. CamStudio and xvidcap are screen video recorders, both of them open source. They make it possible to produce demos or animations by recording screen videos (e.g. Impress animations). Finally, JEdit is an open source text editor written in Java with several extensions; the one particularly relevant here is the XML extension, which is very well suited to editing SMIL documents.

Table 1 Authoring tools
Tool type             Windows               Linux
Audio capture         Real Producer Basic   Real Producer Basic
Video capture         Real Producer Basic   Real Producer Basic
Screen video capture  CamStudio             xvidcap
Animations            Impress               Impress
SMIL editor           JEdit                 JEdit

Figure 2 INRIA technical presentation
Figure 1 Layout section example
4.1 SMIL Template Customization
The multimedia compositions we are interested in are mentioned in Section 2. It is very straightforward to reuse the INRIA SMIL template to create these types of compositions. To produce a type A composition we only have to make the following changes to the SMIL template:
• Delete the link menu.
• Replace the <video> narration by an <audio> element.
• Delete the <area> elements in the narration element.
• Reduce the slide sequence to a single <img> element.
A type A composition is converted into a type B composition by replacing the <img> element in the slide sequence by a <video> element. To produce a type C composition we only have to define the slide sequence and the synchronization between the link menu, the narrative video or audio and the slide sequence. A detailed explanation is given in Section 4.2. An example of a type C composition is shown in Figure 3. The lecturer video has been replaced by an audio track and a GIF animation in order to reduce the amount of storage or network bandwidth required.
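As a minimal sketch (file names and duration invented for this example), the body of a type A composition reduces to a single image accompanied by an audio narration:

  <body>
    <par>
      <img src="diagram.jpg" region="slides" dur="90s"/>
      <audio src="narration.ra"/>
    </par>
  </body>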
4.2 Production Process
The hardware required is very accessible: a PC, speakers, a microphone and a digital video camera. The production process has two main steps:
1. Content creation (slides, audio clips, video clips, etc.).
2. Content integration using SMIL.
A common type of content is a slide sequence created with a presentation editor like PowerPoint or Impress. PowerPoint allows exporting a presentation in JPEG or PNG format so that every slide is exported as a JPEG or PNG file. Impress has an HTML export option that also exports every slide as a JPEG file.
If a slide includes animation, the previous technique is not adequate. An animated slide can be captured using one of the screen capture utilities proposed in Section 4. The capture process generates a video clip that is later converted into Realvideo format in order to get a good result in RealPlayer. This conversion is performed with Real Producer Basic, which supports several video formats as input, such as uncompressed AVI and DV. Natural video obtained with a video camera (webcam, camcorder, etc.) can also be included by performing the same conversion as for screen capture clips. We have found that a target bandwidth between 256 and 512 kbps for Realvideo gives satisfactory results for both screen recordings and natural video.
The presentation narration is produced by recording an independent audio or video clip for every presentation item. This can be done with the capture capability of Real Producer Basic, which directly generates Realaudio and Realvideo formats. The inconvenience is that the captured clip cannot be edited. If audio or video editing is needed, a capture utility (e.g. Nero 7) that generates a Real Producer Basic compatible format is required. When all the individual clips are available in Realaudio or Realvideo format, they are glued into a single narration by means of the rmeditor console utility included in Real Producer Basic.
After all the presentation content items and the presentation narration have been obtained, the next step is to customize the SMIL template (e.g. using JEdit). The customization process has two dimensions:
1. Spatial: definition of the presentation layout (slide region, link region, title region, etc.).
2. Temporal: definition of temporal behavior and synchronization (slide durations and time link locations).
Temporal design is the most complex and is performed in three steps:
1. Get the duration of every individual narration clip (t_slide,dur). This is indicated by RealPlayer when playing the clip.
2. Obtain the sequence of slide starting times (t_slide,start). These can be computed from the t_slide,dur values using a spreadsheet.
3. Design the temporal link index by grouping slides and obtaining the location of every anchor in the presentation timeline from the t_link,start values.
Figure 3 Customization example
The values obtained in the temporal design are included in the SMIL file in the following way:
• The dur attribute of the elements corresponding to the slide sequence (<img> for static slides, <video> for dynamic slides) is set to t_slide,dur.
• The begin attribute of the <area> elements inside the <audio> or <video> element corresponding to the presentation narration is set to t_link,start.
Finally, the href attribute of the <a> elements corresponding to the link menu is set to the id values of the associated <area> elements defined in the narration. An example of setting temporal links is shown in Figure 4. A presentation of this paper can be found at the "SMIL presentation of this paper" reference; looking at its source code illustrates the previous description.
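For instance, with three narration clips lasting 45, 50 and 60 seconds (invented values; Figure 4 shows the paper's actual example), the slide start times are 0 s, 45 s and 95 s, and the computed values would be placed as follows:

  <!-- dur of each slide element set to t_slide,dur -->
  <img src="slide3.jpg" region="slides" dur="60s"/>
  <!-- begin of the <area> inside the narration set to t_link,start -->
  <audio src="narration.ra">
    <area id="summary" begin="95s"/>
  </audio>
  <!-- href of the menu entry set to the corresponding area id -->
  <a href="#summary"><text src="menu3.txt" region="links"/></a>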
4.3 Automatic Generation of SMIL Compositions
The manual generation of SMIL compositions following the procedure in the previous section is not very attractive. A good feature of SMIL is that its XML compliance allows easy automatic generation by means of standard XML tools like XSLT. Once the author has generated the media (slides, audio clips, title and table of contents), an XSLT stylesheet is used to produce the SMIL file without any further user intervention.
The most difficult issue is temporal synchronization, but fortunately Realnetworks media formats can be converted to text by means of the rmeditor utility, particularly using the "-d" option. The generated text file includes the temporal length of the file in milliseconds. To hide this process from the user, a simple front end implemented in Java obtains the temporal length of the clips and executes the XSLT stylesheet in order to generate the complete SMIL composition. The front end also allows layout customization of the four presentation regions: slide, table of contents, title and icon.
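The paper does not list the stylesheet itself, so the following is only a rough sketch of how such a transformation could look. It assumes a hypothetical intermediate manifest in which the Java front end has already recorded each slide image and its narration duration; the element and attribute names are invented for this example:

  <!-- hypothetical manifest written by the front end -->
  <presentation narration="narration.ra">
    <slide image="slide1.jpg" dur="45s"/>
    <slide image="slide2.jpg" dur="50s"/>
  </presentation>

  <!-- XSLT sketch generating the slide sequence of the SMIL body -->
  <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml" indent="yes"/>
    <xsl:template match="/presentation">
      <body>
        <par>
          <audio src="{@narration}"/>
          <seq>
            <xsl:for-each select="slide">
              <img src="{@image}" region="slides" dur="{@dur}"/>
            </xsl:for-each>
          </seq>
        </par>
      </body>
    </xsl:template>
  </xsl:stylesheet>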
5. POLIFORMAT INTEGRATION
The integration into our LMS platform (PoliformaT) of the multimedia material developed with the proposed technique has faced some problems. The main one is that PoliformaT delivers content over HTTPS, which RealPlayer does not support. It would be nice if our university administration became convinced of the convenience of our technique and decided to start up a Helix server coordinated with PoliformaT, but in the meantime we looked for a "PoliformaT alone" solution.
After some setbacks and several tests we developed a solution based on storing the multimedia clips, compressed in zip format, in PoliformaT. A Java applet located in the learning content web page downloads the zip file, decompresses it locally and finally launches RealPlayer to play the local copy of the clip. This solution is feasible because our compositions will be about 5 minutes long, giving a zip file size of about 10 Mbytes; the download over a common broadband Internet connection takes from 10 to 20 seconds. A drawback of the solution is that some configuration has to be performed on the client side. In particular, the applet needs permission to read and write in a local folder and to connect to the PoliformaT HTTPS port. It also needs permission to execute RealPlayer.
Applet permissions can be defined in a text file with a specific name, location and syntax (Lai et al., 1999). Every user can have a configuration for applet permissions in a file named ".java.policy" located in the user's home directory. Permissions are defined by including "grant" entries in the permission file, so a grant entry is required to give our applet the necessary permissions. Assuming a user named "agonzale" and an applet working folder "poliformat" located in the user's home folder, the grant entry on Windows Vista is as shown in Figure 5.
The grant is restricted to the content URL in PoliformaT and establishes four permissions:
1. To read the user home folder path and (on MacOSX only) the file encoding property.
2. To read and write in the working folder.
3. To execute "realplay.exe".
4. To connect to the PoliformaT server on the HTTPS port.
The grant entry syntax depends on the operating system, particularly on the file path syntax, so it is slightly different on Windows Vista, Windows XP, Linux and MacOSX.
Figure 4 Defining temporal links
The applet design is simple: it includes a "Start" button, a progress bar that shows the zip download progress and a clip preview image. A set of parameters tells each applet instance which zip file to download and which SMIL file to play. Figure 6 shows a screenshot of the applet embedded in a learning content HTML page inside PoliformaT, with RealPlayer launched by pressing the applet's "Start" button.
The solution has been successfully tested with Internet Explorer (32-bit version) on Windows XP and Windows Vista, and with Firefox on Windows XP, Windows Vista, Linux (Ubuntu 7.10 and OpenSuSE 10.3) and MacOSX 10.4 (Tiger). Some improvements could be added to make the solution more secure; for instance, the applet could be signed and a checksum could be computed to ensure that the program started by the applet really is RealPlayer.
The permission configuration procedure is carried out automatically by a Java application that generates the ".java.policy" update and offers a friendly user interface. It is implemented in Java because Java applications are portable and have no local access restrictions apart from those that apply to the user. The application requires very little user intervention: it only asks the user for the location of RealPlayer if it is unable to find it in the common paths used by the operating system.
6. CONCLUSIONS AND FUTURE WORK
In this work we have presented a proposal to produce multimedia compositions based on SMIL and on Realnetworks technology, which is linked to the open source project Helix. In particular, we have chosen RealPlayer as the target multimedia client and the Realaudio and Realvideo formats to deliver audio and video media. The proposal includes three types of multimedia compositions of increasing complexity and their implementation by customizing the SMIL template used by INRIA for its technical presentations.
A set of free, and mostly open source, authoring tools is proposed for Windows and Linux. Following the technique presented, lecturers can therefore author multimedia presentations at very low cost. The integration of our multimedia compositions into our Sakai-based e-learning platform PoliformaT has been solved by relying on Java applet technology.
Figure 5 Grant entry to allow the required applet permissions
Figure 6 RealPlayer launched from the applet located in PoliformaT
We have observed that students appreciate multimedia material as a complementary resource in presential classes, and as self-learning material it is much more helpful than static and silent material. We have not yet performed a quantitative evaluation of the impact of including multimedia compositions in our teaching documents. This impact will be measured by the level of student participation, the lecturer evaluation polls and the students' academic results.
REFERENCES
Bulterman, D. (2003), “Using SMIL to encode interactive,
peer-level multimedia annotations”, Proceedings of the 2003

ACM Symposium on Document Engineering, Grenoble, France,
pp. 32-41.
Bulterman, D. and Rutledge, Ll. (2004), SMIL 2.0 Interactive
Multimedia for Web and Mobile Devices, Springer-Verlag,
Heidelberg.
Eidenberger, H. (2003), "SMIL and SVG in teaching", Internet Imaging V, Proceedings of the SPIE, Vol. 5304, pp. 69-80.
González, A. (2003), “Interactive applets for introductory
courses on computer architecture”, International Conference
on Engineering Education 2003, Valencia, Spain.
González, A. (2006), “Teaching document production and
management with Docbook”, II International Conference on
Web Information Systems and Technologies (WEBIST 2006),
Setúbal, Portugal.
González, A. (2007), “Authoring reusable slide presentations”,
III International Conference on Web Information Systems and
Technologies (WEBIST 2007), Barcelona.
Helix project, available at:
Hunter, J. and Little, S. (2001), “Building and indexing a
distributed multimedia presentation archive using SMIL”,
Proceedings of the 5th European Conference on Research and
Advanced Technology for Digital Libraries, pp. 415-28.
IMS Global Learning Consortium, available at: www.imsglobal.org/
INRIA technical presentations, available at: www.inria.fr/multimedia/exposes
Jeschke, S., Knipping, L. and Pfeiffer, O. (2006), “The eChalk
system: potential of teaching with intelligent digital chalk-
boards”, Current Developments in Technology-Assisted Education

2006, m-ICTE2006, pp. 996-1000.
Joukov, N. and Chiueh, T. (2003), “Lectern II: a multimedia
lecture capturing and editing system”, Proceedings of 2003
International Conference on Multimedia and Expo. ICME ’03,
Vol. 2, pp. 681-4
Lai, C., Gong, L., Koved, L., Nadalin, A. and Schemers, R.
(1999), “User authentication and authorization in the Java
platform”, 15th IEEE Annual Computer Security Applications
Conference, pp. 285-90.
Ma, M., Schillings, V., Chen, T. and Meinel, Ch. (2003),
“T-Cube: a multimedia authoring system for eLearning”,
Proceedings of E-Learning 2003, 7-11 November, Phoenix,
Arizona, pp. 2289-96.
Mengod, R. (2006), “PoliformaT, the Sakai-based on-line cam-
pus for UPV – history of a success”, 5th Sakai Conference,
Vancouver, BC, Canada, 30 May-2 June 2006.
Pihkala, K. and Vuorimaa, P. (2006), “Nine methods to extend
SMIL for multimedia applications”, Multimedia Tools and
Applications, Vol. 28, pp. 51-67.
Rogge, B., Bekaert, J. and Van de Walle, R. (2004) “Timing
issues in multimedia formats: review of the principles and
comparison of existing formats”, IEEE Transactions on
Multimedia, Vol. 6 No. 6, pp. 910-24.
SMIL presentation of this paper, available at: www.disca.upv.es/agt/mtel2007/mtel2007.ram
W3C SMIL standard, available at: www.w3.org/TR/2005/
REC-SMIL2-20050107/
Yang, Ch., Yang, Y. and Lin, K. (2001), “A SMIL-based lesson
recording system for distance learning”, Proceedings of 2001
Conference on Distributed Multimedia Systems (DMS2001),

pp. 486-9.
E-learning activity-based material
recommendation system
Feng-jung Liu
Department of Digital Arts and Multimedia Design, TAJEN University, Ping-Tung, Taiwan
Bai-jiun Shih
Department of Management Information System, TAJEN University, Ping-Tung, Taiwan
Abstract
Purpose – Computer-based systems have great potential for delivering learning material. However, problems are encountered, such as the difficulty of sharing learning resources, the high redundancy of learning material, and the deficiency of course briefs. In order to solve these problems, this paper aims to propose an automatic inquiry system for learning materials which utilizes the data-sharing and fast-searching properties of the Lightweight Directory Access Protocol (LDAP) and the Java Architecture for XML Binding (JAXB).
Design/methodology/approach – The paper describes an application that utilizes the techniques of LDAP and JAXB to reduce the load on search engines and the complexity of content parsing. Additionally, by analyzing the logs of learners' learning behaviors, the likely keywords and the associations among the learning course contents are ascertained. The integration of metadata of the learning materials from different platforms and its maintenance in the LDAP server is specified.
Findings – As with a general search engine, learners can search contents using multiple keywords concurrently. The system also allows learners to query by content creator, topic, content body and keywords to narrow the scope of materials.
Originality/value – Teachers can use this system in their education process to help them collect, process, digest and analyze information more effectively.
Keywords: E-learning, Teaching aids
Paper type: Research paper
Interactive Technology and Smart Education (2007) 200–207
© Emerald Group Publishing Limited
1 INTRODUCTION
Computer-based systems have great potential for delivering learning material (Masiello et al., 2005), which frees teachers from handling mechanical matters so they can practice far more humanized pedagogical thinking. However, information comes from different sources embedded in diverse formats in the form of metadata, making it troublesome for computerized programs to create professional materials (Shih et al., 2007). The major problems are:
1. Difficulty of learning resource sharing. Even if all e-learning systems follow a common standard, users still have to visit individual platforms to gain
