We met up at the OU from Monday to Wednesday this week for a development session. We got a lot done and advanced quite a bit – largely thanks to the work of our Spanish colleague David Roldan Alvarez. We have been doing quite a bit of redesign in the run-up to launching a live service. Clipper will now work with these services:
YouTube – done
SoundCloud – waiting on API permissions
Facebook – done
Vimeo – done
Podbean – done with user info
BBC radio – done with user info
M3U8 format – working
Dropbox – done – works with user guidance (remove the zero at the end of the URL and add a 1)
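The Dropbox guidance above amounts to turning a share link (which ends in dl=0) into a direct-download link (dl=1). A minimal sketch of that tweak in Python – the share URL shown is an invented example, not a real file:

```python
def dropbox_direct_url(share_url: str) -> str:
    """Convert a Dropbox share link into a direct-download link
    by changing the trailing dl=0 parameter to dl=1."""
    if share_url.endswith("dl=0"):
        return share_url[:-1] + "1"
    return share_url

# Hypothetical example link (not a real file):
print(dropbox_direct_url("https://www.dropbox.com/s/abc123/talk.mp4?dl=0"))
```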
That’s quite a list! It takes Clipper into really interesting territory as we go forwards. We are working on Microsoft OneDrive and Box, as well as on working over intranets and with DAM systems. We have a bit more to do before we update the current version. M3U8 is a big step – it’s a format used by a lot of museums and archives, and it presents a ‘barrier’ between a video stream URL and users for security. What this means is a big deal for us: if Clipper is whitelisted with an archive service, Clipper users can then access the archive, create clips and annotations, and share them over the open / social web while the video stays where it is. For some archives this could be a game changer – by allowing more interactive user engagement and spreading this into the world of social media, it could drive a lot of traffic to the archive website and, importantly, generate masses of rich analytics – which in turn can justify the costs of the archive and even lead to new revenue streams.
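For readers unfamiliar with M3U8: it is a plain-text playlist format (used by HLS streaming) that lists the media segments of a stream rather than exposing a single video file URL – which is why it acts as that ‘barrier’. A rough sketch of what a playlist looks like and how the segment URIs can be pulled out; the host and segment names are invented:

```python
# A minimal (invented) HLS media playlist, as an archive might serve it.
PLAYLIST = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
https://archive.example.org/stream/seg0.ts
#EXTINF:10.0,
https://archive.example.org/stream/seg1.ts
#EXT-X-ENDLIST
"""

def segment_uris(playlist: str) -> list:
    """Return the media segment URIs: every non-blank line
    that is not a #-prefixed tag line."""
    return [line for line in playlist.splitlines()
            if line and not line.startswith("#")]

print(segment_uris(PLAYLIST))
```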
And Facebook… that’s a big deal, as it is a major video platform these days; this opens up some interesting possibilities in education and community learning.
We have been really fortunate to gain a new team member. David Roldan Alvarez from the Universidad Rey Juan Carlos (Madrid) is working on the Clipper code and project for 3 months as a PhD research student visiting The Open University.
David has picked up the code and ideas really quickly and is working on adding new services to Clipper such as Vimeo and SoundCloud. This is really important for us and for future developments. At the moment Clipper works well with YouTube and any online MP4 / MP3 file that a user can find the URL of (this includes Podbean and many BBC audio services with a ‘download‘ option that links to an MP3 file). This is already a big step, but having several other online services available makes Clipper look less like a ‘one trick pony’ and more like a general-purpose web-based audio-visual annotation tool.
Paradoxically, it also helps us persuade people to adopt Clipper as an add-on for an existing service such as a digital archive or online museum or library collection. From previous workshops we know there was great interest in adding Vimeo (e.g. from the North West Film Archive and the National Library of Scotland) and in the possibility of ‘in-house’ adoption of the code. Having a spread of online services like this helps to persuade people that Clipper can be adopted in a wide range of ways and contexts.
In this ‘longform’ blog post we consider some parallel development opportunities from our work on the Clipper Toolkit, based on what we have learnt in Phase 3 and reflecting on previous involvement in open education projects. Here we examine the feasibility of creating an authoring toolkit to make it easier to produce web-based open learning materials. That this is still a challenge may be surprising, given how long e-learning etc. has been around and has been promising to change our education systems.
Here, we examine some fundamental practical problems and discuss possible solutions. Along the way we consider the opportunities for changing the way we currently work in education to deliver true open education opportunities to students. In the process, we discuss how this is all closely linked to the key issues of Inclusion, Accessibility, Quality, Efficiency and Change.
An open education digital learning resource toolkit that:
Creates web native content that is accessible and works well with assistive technologies, is usable on smartphones and can be used offline
Takes content out of the VLE / LMS – link to it instead. This makes things much, much easier in relation to information management, quality assurance and maintenance etc. The VLE / LMS is then used for discussion, collaboration, grading and feedback etc.
Makes content available in different formats e.g.:
Online web microsites for directly linking to from a VLE etc.
Downloadable zipped web microsites for offline use
EPUB etc. formats for offline access via ebook readers etc.
PDF for offline use and printing
Enables collaborative workflows to jointly author content
Has a storage / DAM (Digital Asset Management) component of the system, linked to a ‘social layer’, where users can share and discuss their work and collaborate with each other (either publicly or privately) – here is a link to an online interactive early prototype designed for art colleges
Improves information management and quality control for both individuals and institutions
Generates rich analytics
Supports the sharing of content with different ‘onion skin’ levels of access; private / named individuals / groups / institutions / open to the web with a CC licence
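The ‘onion skin’ levels above can be pictured as nested rings of visibility, from a private core out to the open web. A toy sketch of how an access check along those lines might work – the ring names mirror the list above, but the function and its semantics are illustrative, not the actual toolkit design:

```python
# Rings ordered innermost (most private) to outermost (most public).
RINGS = ["private", "named", "group", "institution", "open"]

def can_view(shared_at: str, viewer_ring: str) -> bool:
    """A resource shared at a given ring is visible to any viewer whose
    relationship to the owner sits at that ring or closer in."""
    return RINGS.index(viewer_ring) <= RINGS.index(shared_at)

print(can_view("open", "open"))      # a stranger can see openly shared work
print(can_view("private", "group"))  # group members cannot see private work
```

The appeal of the metaphor is that widening the sharing ring never has to revoke anyone’s access – it only ever adds viewers.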
Outline System Diagram
Working Title: open4ed (open publishing engine for education)
Skills Money and Time
In past projects we have used the MIT OCW site as a good example of making standard course materials openly available with a Creative Commons licence. It’s a great, inspiring project, but we know that it takes a lot of resources to produce the content and present it that way. What we would like are tools and services that make it easier for users in education and elsewhere to author their own web-native ‘CourseWare’ – not necessarily for open distribution, but also for use in their own ‘internal’ college courses. Unfortunately, this is beyond the reach of many, who lack the skills or time needed to use specialist (often expensive) software to create web content.
There is a big need for this. In our colleges, online learning resources tend to be loaded into the Virtual Learning Environment (VLE), such as Moodle, by lecturers in the form of Word documents, PowerPoint slides etc. The trouble with this is that it is unusable for students who do not possess a computer with the proprietary software needed to view it and whose main means of accessing the Internet is a smartphone. Even those students who do possess the necessary kit must go through all sorts of hassle to download and view the content. It would be much better if the content were ‘web-native’ from the start so that everyone could view it. We are not the only ones struggling with this; the publishing industry is still trying to make the transition from paper to digital, with many of its designers having to use tortuous workflows and expensive software to convert their existing work to online ‘webified’ versions.
This situation poses some fundamental practical problems for colleges who want to provide blended learning solutions for their students. Recent research from Ofcom indicates that 30%+ of the UK population now rely on a smartphone as their main means of accessing the internet – often without a domestic broadband connection. This is certainly confirmed by recent learner analytics data from City of Glasgow College. This means that colleges are effectively locking out a large and growing demographic – especially those in workplace and community learning contexts.
There is another serious problem with this situation: the accessibility of the content in the VLE for students with disabilities. Much of this content is not designed or formatted to work well with assistive devices. If, instead, the content were truly web-based from the start, the chances of better accessibility would be dramatically increased.
Efficiency and Quality
Another issue relates to the way we currently use our VLEs: people tend to project old patterns of working onto new technology. This results in the VLE being used as a personal digital library by lecturers to collect and deliver their own learning content (copied and altered) into multiple courses. This presents severe problems in information management, quality assurance and efficiency, both for individual lecturers and from an institutional point of view. From a student perspective, access to these course learning resources can be highly variable in content, consistency and quality as they cross individual lecturers’ personal VLE ‘silos’.
User Requirements: Learners – Teachers – Institutions
Based on what we have been discussing, and on working with lecturers and students in vocational education institutions, here is a short list.
Students need digital learning resources that are:
Usable on a smartphone
Accessible with assistive technologies
Downloadable for use offline
Printable as required
Convertible into eBooks / PDF etc. as needed
Lecturers and Instructors need tools and systems that can:
Enable the easy creation of learning resources that are web native from the start, usable on smartphones, and work with assistive technologies
Support different workflows:
Sole author doing everything
Subject expert authors with restricted rights, working with editors and designers to create high quality content
Take the content out of the VLE / LMS to facilitate sharing and collaboration by providing the means for granular access and sharing rights to review and co-create content (using an ‘onion skin’ metaphor for sharing) with:
Just me (private)
Open to the WWW (e.g. OERs, publicity materials etc.)
Provide the means of easy attribution and affiliation (for professional reputation and institutional benefit)
Colleges / Learning Providers need tools and systems that can:
Support the workflows needed to make blended learning economically viable
Enable the effective management and sharing of learning resources – outside the VLE / LMS
Offer the ability to link learning resources to the VLE / LMS
Gather useful analytical data
Provide learning resources that are accessible to disabled learners
Deliver learning resources that are usable on smartphones
Use file formats that enable long term access and reuse of content
Attach different licenses and employ simple digital rights management (DRM) methods
Share learning content with partner institutions
Publish learning resources on the open web – i.e. enable public access
Outline System Description
Inspired by the MIT OpenCourseWare initiative and innovative teaching teams in FE & HE
Uses a NoSQL database as the back-end, with data documents encoded in JSON-LD / BSON
Uses CSS / JS to create different styles of web content – articles, slideshows, magazines etc.
Content is saved as
JSON-LD documents for live editing
Mini website (HTML) online with a persistent URL (for linking to) in a web directory for serving and archiving
Downloadable zip file with self-contained website for offline viewing (and editing)
EPUB / PDF for offline viewing on e-readers and printing
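To give a flavour of the live-editing format, here is a hypothetical JSON-LD document for a single learning resource. The schema.org vocabulary is a plausible choice for this kind of content, but the field names and values below are purely illustrative, not a finished design:

```python
import json

# A hypothetical JSON-LD document for one learning resource;
# the vocabulary and field values are illustrative only.
resource = {
    "@context": "https://schema.org",
    "@type": "LearningResource",
    "name": "Introduction to Welding Safety",
    "inLanguage": "en",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "hasPart": [
        {"@type": "WebPage", "name": "Section 1: PPE"},
        {"@type": "WebPage", "name": "Section 2: Workshop rules"},
    ],
}

doc = json.dumps(resource, indent=2)
print(doc)
```

Because the document is just structured JSON, the same source can be rendered out as a microsite, an EPUB or a PDF by different converters.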
The Storage / DAM component of the system is linked to a ‘social layer’ (see the schematic diagram above), where users can share and discuss their work and collaborate with each other – here is a link to an early prototype.
You can register or log in with email@example.com and the password clipper20. This produces mini websites that contain ‘virtual’ clip collections, providing a starting basis for the content creation / conversion tool.
Storage / Sharing solutions
Options here range from in-house repository / Digital Asset Management (DAM) solutions to external cloud services, such as the network file systems (NFS) supplied by Amazon and others. A simple subject / course taxonomy is needed for browsing (MIT provides a good example), together with full text searching. The fact that the learning resources created by the content creation tool would be ‘well-structured documents’ would help with searching.
Based on previous (painful!) experience of working with older academic repository projects, keeping this part of the system as simple as possible is sensible. Content would be stored in simple web directories (apart from the JSON-LD documents, which would live in the NoSQL database), together with an XML/JSON metadata ‘ticket’ describing the content in each directory. In addition, the content metadata would be indexed in a database to facilitate searching. This way the system content would be both machine- and human-readable in the long term, which aids resilience.
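A metadata ‘ticket’ of this kind could be as simple as a small JSON file sitting alongside the content in its directory. The fields below are a guess at a minimal set, not a finished schema:

```python
import json

# A sketch of a per-directory metadata "ticket"; the fields and
# values are illustrative, not a defined standard.
ticket = {
    "title": "Introduction to Welding Safety",
    "created": "2017-05-02",
    "authors": ["A. Lecturer"],
    "licence": "CC-BY-4.0",
    "formats": ["html", "epub", "pdf", "zip"],
}

# Written next to the content so the directory stays human- and
# machine-readable without needing any database.
ticket_json = json.dumps(ticket, indent=2, sort_keys=True)
print(ticket_json)
```

The database index can then be rebuilt at any time by walking the directories and re-reading the tickets, which is what makes the design resilient.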
What this proposal is not
This is not a SCORM / Learning Object editor (e.g. Articulate, Xerte, Adapt etc.). There are plenty of existing tools that do these things with varying degrees of success. In the college sector, only a small proportion of learning resources are in these formats, and they often need specialised staff to create, operate and support them. This proposal seeks to tackle the problems posed for students by most existing content in college VLEs, as described in this post.
Why Not Use a CMS?
Content Management Systems (CMSs) have been around a long time, and some – such as the popular and widely used WordPress and Drupal – do a great job, requiring varying levels of technical expertise to install, maintain and use. If they could meet the needs described in this post, our college VLEs / LMSs would already be full of web-based content that was accessible and usable on a smartphone. The reason CMSs do not provide a solution lies in their fundamental design: they are intended to make it easier to create and manage web content within their own framework. To varying degrees this means the content is trapped inside the CMS application (distributed inside its own file system) and can be difficult for users to export.
Extracting the content from the CMS as a self-contained website is not something it was ever designed to do – the job of the CMS is to assemble the content and ‘play’ it in a web browser; the CMS provides the platform to present the content to the web. So, a fundamental difference is that our system is intended to create web content (micro-sites) outside of the authoring tool, so that it is portable and can be placed wherever we want. We also want our web content to be editable by tools other than our own content editor; by creating self-contained microsites with HTML as our native file format, this becomes possible. This approach also lends itself to producing different formats from the same content, such as EPUB, PDF etc., for offline use and printing on paper. This is basically the same philosophy used by O’Reilly Publishing to support its in-house print / electronic publishing operations – so we feel we are on the right track here.
Using a CMS to provide the ‘Social Layer’
However, we do see a role for CMSs in our proposed system: not for content creation, but to provide a ‘social layer’ on top of the system, where users can share and discuss the learning resources that are in the system (with private and public modes) – particularly useful for those resources that are open to the web. A good starting model for this would be the system used by graphic designers to showcase their work, called Behance. Although this is now based on a proprietary Adobe system, it would be possible to create something similar based on the Drupal CMS – in fact, a few years back we worked on a prototype for this with a leading Drupal social media company in London, Bright Lemon.
Why has this not been done / proposed before?
That, as they say, is a good question. It might sound strange in an ed-tech setting, but we think the reasons are more sociological than technical. By that we mean that people and institutions can become conservative in their use of technologies (even tech people) and can be very loyal to technologies, platforms and commercial providers for a whole host of reasons. You can see this dynamic play out every day when proposing a change to an institutional IT department. In the UK, this has led to the VLE / LMS becoming embedded as the central feature of blended learning provision, with vendors adding new features and expanding functionality all the time to keep their customers ‘hooked’. In this situation, it can be very difficult to change, or even to imagine a change.
Not a Repository?
Previously we have worked with academic repository systems designed for digital research papers, trying to wrangle them into something useful for learning and teaching. There were numerous problems with this – sweeping assumptions that they would naturally just work for learning resources, a lack of viable developer communities in terms of numbers (important for scale), open code that could only be interpreted and used by the original authors, poor interface / UX, and an obsession with entering metadata. Below is a very short video presentation reflecting on these earlier experiences and the beginnings of the ideas presented in this post:
The project phase would be organised and managed by a consortium, with the outputs (code and learning resources) being openly licensed to achieve maximum impact.
Service and Sustainability
In the service phase, with wider take-up, the service would remain free to college users and be paid for at source by existing national services as part of the consortium. The design of the system itself would ensure low service running costs – with little call being made on computing resources by the client-side authoring tool components. The service would be operated by a sub-set of the original project consortium. The project and service would also act as an ongoing demonstrator of what is possible using technology to enable and drive the wider fundamental cultural changes that are needed in the sector (see the Change section below). To be clear, these changes involve supporting the collaborative workflows between academics and institutions that are needed to develop an economically sustainable model to create, manage, and share the digital learning resources needed to effectively deliver open / flexible / blended learning in the college sector.
With the system code being open source and using popular open-source components, it would be entirely possible for institutions to take and adapt the system for their own use, at a pace determined by their appetite for new technologies and approaches. We envisage that a national service would continue alongside institutional adoption – a twin-track approach. The service providers would be able to generate income from consultancy activities assisting organisations to adopt the toolkit, in relation to integration and training. There is also the potential for this system to be attractive to users in the wider public and private sectors, for instance in relation to training and corporate communications; this in turn could provide revenue for the service providers. This is, of course, an optimistic view; much would depend on actual adoption by a core of users from the original project consortium to encourage others, and on effective dissemination and promotion.
In traditional F2F education, lecturers and instructors tend to learn their ‘craft’ on the job, and in the process the learning materials they create for their students record their own developing teaching and subject knowledge and act as a touchstone to guide their own practice – a kind of professional ‘life support pack’. So, not surprisingly, they can become deeply attached to these resources.
Unfortunately, these traditional F2F methods of individual learning material creation do not scale to support blended / open / flexible learning and can obstruct the new work flows that are needed. Our current use of VLEs/ LMSs tends to reflect a traditional ‘silo’ model of teaching and obstructs the move to team teaching that is needed to make new forms of digital learning economical.
It is not surprising that this tension between traditional teaching modes and new blended modes exists. They each require a different organisation of academic work – or a division of labour, if you like. Colleges (and universities) currently struggle to accommodate these different approaches, as most of their practice is still in F2F mode, and this is reinforced by things like timetables, assessment modes, funding models, employment contracts and even student expectations. These ‘systemic’ factors are often overlooked in the research literature and downplayed in the commercial hype that sometimes dominates discourse about e-learning.
To understand the scale of this tension, it helps to look at the economic models and workflows used by existing distance providers such as the Open University. In this model the learning resources are designed to take some of the pedagogic load that would normally be supplied by F2F contact. Here, there is a greater up-front investment in course design, in relation to planning learner activities and paths and the accompanying learning resource design. Typically, in this model a course has to run 5-7 times before it recoups its costs. This is at the other extreme of the spectrum from traditional campus-based education and operates in a radically different way. In this scenario all learning resources are jointly designed and managed centrally – this represents a key cultural move from ‘my course and my resources’ to ‘our course and our resources’.
Moving towards an effective and economically sustainable blended / flexible learning model requires a move along the pedagogic spectrum towards distance learning practice – especially in relation to course and learning resource design and their related collaborative workflows. Diana Laurillard argues that there is a need to rethink teaching in the 21st century as a design-intensive profession, in order to make the best use of technology and meet the challenges we face in society. These proposals are intended to play their part in supporting this transition in teaching practice by addressing some of the very practical issues involved.
Students and Learners
So far we have been discussing supplying students and learners with better learning resources and empowering academics and institutions to do this. This is important for the reasons set out here. However, what we are proposing is creating a general-purpose web content authoring tool that can support collaborative workflows, managing and sharing content and integration with the world of social media. Making this system available to students opens up a host of opportunities for education, peer-to-peer learning and the possibility of creating a truly independent portfolio / journal system that users can take on and offline at will and is totally under their own control.
Trevor and I attended the IAnnotate web annotation conference in Berlin this week, having been kindly alerted to it by colleagues at EUSCREEN. We had previously encountered the image annotation standard IIIF through colleagues from Digirati in the UK. Previous experience had made us a little wary, as standards work can sometimes lose contact with practical everyday experience and become an expensive end in its own right, consuming vast resources but leading nowhere – my own experience with educational interoperability standards confirms that :-).
So, we were wary of getting entangled in a standards runaway – as it happens, some of the other participants had similar reservations about past standards initiatives, including W3C ones. However, our experience of attending the W3C working group briefing on the development of the web annotation standards was like a great breath of fresh air. One statement in particular stuck in my mind – it went something like:
“Look, we don’t care what you do inside your own [web annotation] systems, but when you come to share your data with the outside world it makes sense to do it in a standardised way – so that others can make sense of it and use it”
This was the turning point for me – the little thing that revealed the intent – that, and the fact that the proposed standard is admirably practical and lightweight, and makes useful reuse of other W3C standards such as media fragments. Believe it or not, I have seen developers and designers trying to adopt a heavy standard internally in their systems in a slavish and sometimes pedantic manner – leading to what might most charitably be described as ‘sub-optimal outcomes’.
So, a great result for us from attending the conference – we also get a ready-made data model that we can adopt and build on without having to dream up our own, which also makes compliance with the emerging W3C web annotation standards easier and more useful.
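To give a flavour of why the model felt so practical: a clip in the W3C Web Annotation style is just a small JSON-LD document whose target points at a media URL plus a media fragment (`#t=start,end`) selecting a time range. The sketch below follows the published model, though the video URL and comment text are invented:

```python
import json

# A clip expressed in the W3C Web Annotation data model, using a
# media fragment (#t=30,45) to select 30s-45s of a video.
# The video URL and body text are invented for illustration.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "Interview section on apprenticeships",
    },
    "target": "https://archive.example.org/films/interview.mp4#t=30,45",
}

print(json.dumps(annotation, indent=2))
```

The neat part, as the quote above suggests, is that internal storage can be anything at all; serialising to this shape only matters at the point of sharing.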
Last week the Clipper team participated in an invited workshop at the British Library, organised by the International Image Interoperability Framework (IIIF) consortium. The purpose of the workshop was to collate use cases and start outlining a development road map for extending the IIIF to include support for Audio/Video annotation. This was a great opportunity to find out more about the IIIF and the collaborative design process that has produced it.