Development Workshop OU 27 February – 1 March

We met up at the OU from Monday to Wednesday this week for a development session. We got a lot done and advanced quite a bit – largely due to the work of our Spanish colleague David Roldan Alvarez. We have been doing quite a bit of redesign in the run-up to launching a live service. These are the services that Clipper now works with:

  • YouTube – done
  • SoundCloud – waiting on API permissions
  • Facebook – done
  • Vimeo – done
  • Podbean – done with user info
  • BBC radio – done with user info
  • M3U8 format – working
  • Dropbox – done – works with user guidance (remove the zero and add 1 to the end of the URL)
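
The Dropbox guidance above can be sketched in a few lines of JavaScript – assuming the usual Dropbox share links that end in `dl=0`; the function name is ours for illustration, not Clipper's actual code:

```javascript
// Sketch: turn a standard Dropbox share link (ending "dl=0") into a
// direct-download link (ending "dl=1") that points at the media file itself.
// Function name and behaviour are illustrative assumptions.
function toDirectDropboxUrl(shareUrl) {
  // Swap the trailing "dl=0" query value for "dl=1"
  return shareUrl.replace(/dl=0$/, 'dl=1');
}

// toDirectDropboxUrl('https://www.dropbox.com/s/abc123/clip.mp4?dl=0')
//   → 'https://www.dropbox.com/s/abc123/clip.mp4?dl=1'
```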

That’s quite a list! It takes Clipper into really interesting territory as we go forward. We are working on Microsoft OneDrive and Box, as well as on working over intranets and with DAM systems. We have a bit more to do before we update the current version. M3U8 is a big step: it’s a format used by a lot of museums and archives, and it presents a ‘barrier’ between a video stream URL and users for security. What this means is a big deal for us – if Clipper is whitelisted with an archive service, Clipper users can then access the archive, create clips and annotations, and share them over the open / social web, but the video stays where it is. For some archives this could be a game changer – by allowing more interactive user engagement and spreading this into the world of social media, it could drive a lot of traffic to the archive web site and, importantly, generate masses of rich analytics – which in turn can justify the costs of the archive and even lead to new revenue streams.
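
To make the ‘barrier’ concrete, here is a rough sketch (not Clipper’s actual code) of what reading an M3U8 playlist involves. The playlist is plain text in which `#`-prefixed lines are tags and the remaining lines are the stream or segment URLs – the addresses that the format keeps one step removed from the end user:

```javascript
// Sketch: pull the stream/segment URIs out of an M3U8 playlist.
// In the real format (HLS), lines beginning "#" are tags such as
// #EXTM3U or #EXTINF; the other non-empty lines are URIs.
function extractM3u8Uris(playlistText) {
  return playlistText
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.length > 0 && !line.startsWith('#'));
}

// A tiny made-up playlist for illustration:
const playlist = [
  '#EXTM3U',
  '#EXTINF:10,',
  'segment0.ts',
  '#EXTINF:10,',
  'segment1.ts'
].join('\n');

// extractM3u8Uris(playlist) → ['segment0.ts', 'segment1.ts']
```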

And Facebook… that’s a big deal, as it is a major video platform these days; this opens up some interesting possibilities in education and community learning.

Clipper Checklist – love ticking these off! This shows Facebook and M3U8, amongst others.

Adding new online collections to Clipper: Vimeo & SoundCloud in progress

We have been really fortunate to have gained a new team member. David Roldan Alvarez, from the Universidad Rey Juan Carlos (Madrid), is working on the Clipper code and project for three months as a PhD research student visiting The Open University.

David has picked up the code and ideas really quickly and is working on adding new services to Clipper, such as Vimeo and SoundCloud. This is really important for us and for future developments. At the moment Clipper works well with YouTube and any online MP4 / MP3 file that a user can find the URL of (this includes Podbean and many BBC audio services with a ‘download‘ option that links to an MP3 file). This is already a big step, but having several other online services available makes Clipper look less like a ‘one trick pony’ and more like a general-purpose web-based audio-visual annotation tool.

Paradoxically, it also helps us persuade people to adopt Clipper as an add-on for an existing service such as a digital archive or an online museum or library collection. From previous workshops we know that there was great interest in adding Vimeo (e.g. from the North West Film Archive and the National Library of Scotland) and in the possibility of ‘in-house’ adoption of the code. Having a spread of online services like this helps to persuade people that Clipper can be adopted in a wide range of ways and contexts.

Our recent workshop at the OU – David (right) gets introduced to Clipper humour!

Branching Out: An Open Education Toolkit

Introduction

In this ‘longform’ blog post we consider some parallel development opportunities arising from our work on the Clipper Toolkit, based on what we have learnt in Phase 3 and reflecting on previous involvement in open education projects. Here we examine the feasibility of creating an authoring toolkit to make it easier to produce web-based open learning materials. That this is still a challenge may be surprising, given how long e-learning has been around and promising to change our education systems.

Here, we examine some fundamental practical problems and discuss possible solutions. Along the way we consider the opportunities for changing the way we currently work in education to deliver true open education opportunities to students. In the process, we discuss how this is all closely linked to the key issues of Inclusion, Accessibility, Quality, Efficiency and Change.

An early prototype web content creation tool – showing an MIT OCW ‘clone’, modelled on the original MIT site as a starting point. Notice the additional features, such as the EPUB option and the ‘licence picker’.

Vision

An open education digital learning resource toolkit that:

  • Creates web native content that is accessible and works well with assistive technologies, is usable on smartphones and can be used offline
  • Takes content out of the VLE / LMS – link to it instead. This makes things much, much easier in relation to information management, quality assurance and maintenance etc. The VLE / LMS is then used for discussion, collaboration, grading and feedback etc.
  • Makes content available in different formats e.g.:
    • Online web microsites for directly linking to from a VLE etc.
    • Downloadable zipped web microsites for offline use
    • EPUB etc. formats for offline access via ebook readers etc.
    • PDF for offline use and printing
  • Enables collaborative workflows to jointly author content
  • Has a storage / DAM (Digital Asset Management) component linked to a ‘social layer’, where users can share and discuss their work and collaborate with each other (either publicly or privately) – here is a link to an online interactive early prototype designed for art colleges
  • Improves information management and quality control for both individuals and institutions
  • Generates rich analytics
  • Supports the sharing of content with different ‘onion skin’ levels of access: private / named individuals / groups / institutions / open to the web with a CC licence

Outline System Diagram

System Architecture Working Diagram

Working Title: open4ed (open publishing engine for education)

open4ed logo

Problems

Skills Money and Time

In past projects we have used the MIT OCW site as a good example of making standard course materials openly available with a Creative Commons licence. It’s a great, inspiring project, but we know that it takes a lot of resources to produce the content and present it that way. What we would like are tools and services that make it easier for users in education and elsewhere to author their own web-native ‘CourseWare’, not necessarily for open distribution but also for use in their own ‘internal’ college courses. Unfortunately, this is beyond the reach of many, who lack the necessary skills or time to use the specialist (often expensive) software needed to create web content.

Digital Inclusion

There is a big need for this. In our colleges, online learning resources tend to be loaded into the Virtual Learning Environment (VLE), such as Moodle, by lecturers in the form of Word documents and PowerPoint slides etc. The trouble with this is that such content is unusable for students who do not possess a computer with the proprietary software needed to view it and whose main means of accessing the Internet is a smartphone. Even those students who do possess the necessary kit must go through all sorts of hassle to download / view the content. It would be much better if the content was ‘web-native’ from the start so that everyone could view it. We are not the only ones struggling with this: the publishing industry is still trying to make the transition from paper to digital, with many of their designers having to use tortuous workflows and expensive software to convert their existing work to online ‘webified’ versions.

This situation poses some fundamental practical problems for colleges that want to provide blended learning solutions for their students. Recent research (Ofcom) indicates that 30%+ of the UK population now rely on a smartphone as their main means of accessing the internet – often without a domestic broadband connection. This is certainly confirmed by recent learner analytics data from City of Glasgow College. It means that colleges are effectively locking out a large and growing demographic – especially those in workplace and community learning contexts.

Accessibility

There is another serious problem with this situation: the accessibility of the content in the VLE for students with disabilities. Much of this content is not designed or formatted in such a way as to work well with assistive devices. If the content were truly web-based from the start, the chances of better accessibility would be dramatically increased.

Efficiency and Quality

Another issue relates to the way we currently use our VLEs: people tend to project old patterns of working onto new technology. This results in the VLE being used as a personal digital library by lecturers to collect and deliver their own learning content (copied and altered) into multiple courses. This presents severe problems in information management, quality assurance and efficiency, both for individual lecturers and from an institutional point of view. From a student perspective, access to these course learning resources can be highly variable in terms of content, consistency and quality as they cross individual lecturers’ personal VLE ‘silos’.

Content Creation Tool Dashboard with options for sharing and downloading in different formats (zip, EPUB) and viewing / linking to the microsite.

Solutions

User Requirements: Learners – Teachers – Institutions

Based on what we have been discussing, and through working with lecturers and students in vocational education institutions, here is a short list.

Students need digital learning resources that are:

  • Usable on a smartphone
  • Accessible with assistive technologies
  • Downloadable for use offline
  • Printable as required
  • Convertible into eBooks /PDF etc. as needed

Lecturers and Instructors need tools and systems that can:

  • Enable the easy creation of learning resources that are web native from the start, usable on smartphones, and work with assistive technologies
  • Support different workflows:
    • Sole author doing everything
    • Subject expert authors with restricted rights, working with editors and designers to create high quality content
  • Take the content out of the VLE / LMS to facilitate sharing and collaboration by providing the means for granular access and sharing rights to review and co-create content (using an ‘onion skin’ metaphor for sharing) with:
    • Just me (private)
    • Named individuals
    • Groups
    • My Institution
    • Between Institutions
    • Open to the WWW (e.g. OERs, publicity materials etc.)
  • Provide the means of easy attribution and affiliation (for professional reputation and institutional benefit)
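
As a sketch of how the ‘onion skin’ idea could work in code (the ring names and their ordering here are our assumptions for illustration, not a finalised design), the levels can be modelled as an ordered list, with a viewer seeing a resource only if their relationship to it sits at or inside the ring it was shared to:

```javascript
// 'Onion skin' sharing sketch – rings ordered from innermost (private)
// to outermost (open web). Ring names are illustrative assumptions.
const RINGS = ['private', 'named', 'group', 'institution', 'inter-institution', 'open'];

// A viewer can see a resource if their relationship ring is at or
// inside (i.e. no further out than) the ring the resource was shared to.
function canView(sharedAt, viewerRing) {
  return RINGS.indexOf(viewerRing) <= RINGS.indexOf(sharedAt);
}

// canView('group', 'named') → true  (a named collaborator sees group-shared work)
// canView('group', 'open')  → false (the open web does not)
```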

Colleges / Learning Providers need tools and systems that can:

  • Support the workflows needed to make blended learning economically viable
  • Enable the effective management and sharing of learning resources – outside the VLE / LMS
  • Offer the ability to link learning resources to the VLE / LMS
  • Gather useful analytical data
  • Provide learning resources that are accessible to disabled learners
  • Deliver learning resources that are usable on smartphones
  • Use file formats that enable long term access and reuse of content
  • Attach different licenses and employ simple digital rights management (DRM) methods
  • Share learning content with partner institutions
  • Publish learning resources on the open web – i.e. enable public access
Early Design Sketches from Workshops at London Art Colleges – the beginning of the ‘onion skin’ idea of sharing access levels.

Outline System Description

Overview

System Outline

  • Uses a non-SQL database as the back-end, with data documents encoded in JSON-LD / BSON
  • Uses a JavaScript framework (e.g. Angular 2) for the front-end content creation tool
  • Uses CSS / JS to create different styles of web content – articles, slideshows, magazines etc.
  • Content is saved as:
    • JSON-LD documents for live editing
    • Mini website (HTML) online with a persistent URL (for linking to) in a web directory for serving and archiving
    • Downloadable zip file with self-contained website for offline viewing (and editing)
    • EPUB / PDF for offline viewing on e-readers and printing
  • The Storage / DAM component of the system is linked to a ‘social layer’ (see the schematic diagram above), where users can share and discuss their work and collaborate with each other – here is a link to an early prototype
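
To illustrate the outline above (the field names and values here are hypothetical, not a published schema), a content document in the non-SQL back-end might look something like this JSON-LD, using schema.org terms:

```javascript
// A hypothetical JSON-LD learning-resource document of the kind the
// outline describes. The schema.org context is real; the structure
// and values are made up for illustration only.
const resourceDoc = {
  '@context': 'http://schema.org',
  '@type': 'CreativeWork',
  'name': 'Example Unit: Workshop Safety',
  'author': { '@type': 'Person', 'name': 'A. Lecturer' },
  'license': 'https://creativecommons.org/licenses/by/4.0/',
  // The same document can be rendered out in the formats listed above:
  'encodingFormat': ['text/html', 'application/epub+zip', 'application/pdf'],
  'hasPart': [
    { '@type': 'WebPage', 'name': 'Section 1: Introduction' },
    { '@type': 'WebPage', 'name': 'Section 2: Risk Assessment' }
  ]
};
```

Storing content as a structured document like this is what makes the one-source / many-formats approach (microsite, zip, EPUB, PDF) practical.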

Related Prototypes / Proof of Concept

Early proof of concept (part working): http://reachwill.co.uk/opencoursebook2/

Examples of content created from the old prototype

Current work – Clipper – https://clipperdev.com/#/editor – based on MongoDB, JSON-LD, Node.js, Express.js, NGINX, Webpack and Angular 2

You can register or log in with clipper20@clippertube.com and a password of clipper20. This produces mini websites that contain ‘virtual’ clip collections, providing a starting basis for the content creation / conversion tool.

Storage / Sharing solutions

This could take a variety of forms, from an in-house repository / Digital Asset Management (DAM) solution to external cloud services such as the network file systems supplied by Amazon and others. A simple subject / course taxonomy is needed for browsing (MIT provides a good example), together with full-text searching. The fact that the learning resources created by the content creation tool would be ‘well-structured documents’ would help with searching functions.

The MIT Course Taxonomy

Based on previous (painful!) experience of working with older academic repository projects, keeping this part of the system as simple as possible is sensible. Content would be stored in simple web directories (apart from the JSON-LD documents, which would live in the non-SQL database), together with an XML/JSON metadata ‘ticket’ describing the content in the directory. In addition, the content metadata would be indexed in a database to facilitate searching. This way the system content would be both machine and human readable in the long term, which aids resilience.
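
For example, the per-directory metadata ‘ticket’ could be as simple as the following (a sketch with hypothetical field names, shown here as a JSON object):

```javascript
// Sketch of a per-directory metadata 'ticket'. A copy sits in the web
// directory (human and machine readable) and the same data is indexed
// in the database for searching. All field names are illustrative.
const ticket = {
  title: 'Example Unit: Workshop Safety',
  created: '2017-02-27',
  licence: 'CC-BY-4.0',
  subjects: ['engineering', 'health-and-safety'],            // simple taxonomy terms
  formats: ['index.html', 'site.zip', 'book.epub', 'book.pdf'] // files in this directory
};
```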

Discussions

What this proposal is not

This is not a SCORM / Learning Object editor (e.g. Articulate, Xerte, Adapt etc.). There are plenty of existing tools that do these things with varying degrees of success. In the college sector, only a small proportion of learning resources are in these formats, and they often need specialised staff to create, operate and support them. This proposal seeks to tackle the problems posed for students by most existing content in college VLEs, as described in this post.

Why Not Use a CMS?

Content Management Systems (CMSs) have been around a long time, and some – such as WordPress and Drupal – do a great job; they are popular and widely used, requiring varying levels of technical expertise to install, maintain and use. If they could meet the kind of needs described in this post, our college VLEs / LMSs would already be full of web-based content that was accessible and usable on a smartphone. The reason CMSs do not provide a solution lies in their fundamental design: they are intended to make it easier to create and manage web content within their own framework. To varying degrees this means the content is trapped inside the CMS application (distributed inside its own file system) and can be difficult for users to export.

Extracting the content from the CMS as a self-contained web site is not something it was ever designed to do – the job of the CMS is to assemble the content and ‘play’ it in a web browser; the CMS provides the platform to present the content to the web. So, a fundamental difference is that our system is intended to create web content (micro-sites) outside of the authoring tool, portable enough to be placed wherever we want. We also want our web content to be editable by tools other than our own content editor; by creating self-contained microsites with HTML as our native file format, this becomes possible. This approach also lends itself to producing different formats from the same content, such as EPUB, PDF etc., for use offline and printing on paper. This is basically the same philosophy used by O’Reilly Publishing to support its in-house print / electronic publishing operations – so we feel we are on the right track here.

Using a CMS to provide the ‘Social Layer’

However, we do see a role for CMSs in our proposed system: not for content creation, but to provide a ‘social layer’ on top of the system, where users can share and discuss the learning resources that are in the system (with private and public modes) – particularly useful for those resources that are open to the web. A good starting model would be the system used by graphic designers to showcase their work, called Behance. Although this is now based on a proprietary Adobe system, it would be possible to create something similar based on the Drupal CMS – in fact, a few years back we worked on a prototype for this with Bright Lemon, a leading Drupal social media company in London.

OPEN4ED ‘Social Layer’ web site, based on Drupal – interactive online prototype available at this link: http://open4ed.digitalinsite.co.uk/

Why has this not been done / proposed before?

That, as they say, is a good question. It might sound strange in an ed-tech setting, but we think the reasons are more sociological than technical. By that we mean that people and institutions can become conservative in their use of technologies (even tech people) and can be very loyal to technologies, platforms and commercial providers for a whole host of reasons. You can see this dynamic play out every day when proposing a change to an institutional IT department. In the UK, this has led to the VLE / LMS becoming embedded as the central feature of blended learning provision, with vendors adding new features and expanding functionality all the time to keep their customers ‘hooked’. In this situation, it can be very difficult to change, or even to imagine a change.

Not a Repository?

Previously we have worked with academic repository systems designed for digital research papers, trying to wrangle them into something useful for learning and teaching. There were numerous problems with this – from sweeping assumptions that they would naturally just work for learning resources, to a lack of viable developer communities in terms of numbers (important for scale), open code that could only be interpreted / used by the original authors, poor interface / UX, and an obsession with entering metadata. Below is a very short video presentation reflecting on these earlier experiences and the beginnings of the ideas presented in this post:

ALTO UK Overview with Fruit and Veg! from Teaching and Learning Exchange on Vimeo.

Project & Service Phases and Sustainability

Project

In the world outside the college / university VLE / IT department, technological change has been proceeding extremely quickly, with things like HTML5, the Semantic Web, cloud services, non-SQL databases, JavaScript frameworks, Node.js etc. driving things forward. This means what we are proposing here is now quite feasible. For reasons of speed, economy and agility, in the project phase we would propose to create the service in the cloud, with college users able to log in and use the service freely. The utility of what the system can do would be a strong driver for take-up, and this would be accompanied by a programme of community engagement to guide system co-development and disseminate our vision. From the start the system would feature some public, openly licensed collections (using a CC0 licence) of learning resources for vocational learning that could be used by anyone – potentially the only such public online collection in the UK.

The project phase would be organised and managed by a consortium, with the outputs (code and learning resources) being openly licensed to achieve maximum impact.

Service and Sustainability

In the service phase, with wider take-up, the service would remain free to college users and be paid for at source by existing national services as part of the consortium. The design of the system itself would ensure low service running costs, with little call being made on computing resources by the client-side authoring tool components. The service would be operated by a sub-set of the original project consortium. The project and service would also act as an ongoing demonstrator of what is possible using technology to enable and drive the wider fundamental cultural changes needed in the sector (see the Changes section below). To be clear, these changes involve supporting the collaborative workflows between academics and institutions that are needed to develop an economically sustainable model to create, manage and share the digital learning resources required to effectively deliver open / flexible / blended learning in the college sector.

With the system code being open source and using popular open-source components, it would be entirely possible for institutions to take and adopt the system for their own use at a pace determined by their appetite for new technologies and approaches. We would envisage a national service continuing alongside institutional adoption – a twin-track approach. The service providers would be able to generate income from consultancy activities helping organisations adopt the toolkit, in relation to integration and training. There is also the potential for this system to be attractive to users in the wider public and private sectors, for instance in relation to training and corporate communications. This in turn could provide revenue for the service providers. This is, of course, an optimistic view; much would depend on actual adoption by a core of users from the original project consortium to encourage others, and on effective dissemination and promotion.

Changes

In traditional F2F education, lecturers and instructors tend to learn their ‘craft’ on the job, and in the process the learning materials they create for their students record their own developing teaching and subject knowledge and act as a touchstone to guide their own practice – a kind of professional ‘life support pack’. So, not surprisingly, they can become deeply attached to these resources.

Unfortunately, these traditional F2F methods of individual learning material creation do not scale to support blended / open / flexible learning and can obstruct the new workflows that are needed. Our current use of VLEs / LMSs tends to reflect a traditional ‘silo’ model of teaching and obstructs the move to team teaching that is needed to make new forms of digital learning economical.

It is not surprising that this tension between traditional teaching modes and new blended modes exists. They each require a different organisation of academic work – a division of labour, if you like. Colleges (and universities) currently struggle to accommodate these different approaches, as most of their practice is still in F2F mode, and this is reinforced by things like timetables, assessment modes, funding models, employment contracts and even student expectations. These ‘systemic’ factors are often overlooked in the research literature and downplayed in the commercial hype that sometimes dominates discourse about e-learning.

To understand the scale of this tension, it helps to look at the economic models and workflows used by existing distance providers such as the Open University. In this model the learning resources are designed to take some of the pedagogic load that would normally be supplied by F2F contact. Here, there is a greater up-front investment in course design, in relation to planning learner activities and paths and the accompanying learning resource design. Typically, in this model a course has to run 5-7 times before it recoups its costs. This is at the other extreme of the spectrum from traditional campus-based education and operates in a radically different way. In this scenario all learning resources are jointly designed and managed centrally – a key cultural move from ‘my course and my resources’ to ‘our course and our resources’.

Moving towards an effective and economically sustainable blended / flexible learning model requires a move along the pedagogic spectrum towards distance learning practice – especially in relation to course and learning resource design and their related collaborative workflows. Diana Laurillard argues that there is a need to rethink teaching in the 21st century as a design-intensive profession in order to make the best use of technology and meet the challenges we face in society. These proposals are intended to play their part in supporting this transition in teaching practice by addressing some of the very practical issues involved.

Students and Learners

So far we have been discussing supplying students and learners with better learning resources and empowering academics and institutions to do this. This is important for the reasons set out here. However, what we are proposing is a general-purpose web content authoring tool that can support collaborative workflows, the management and sharing of content, and integration with the world of social media. Making this system available to students opens up a host of opportunities for education and peer-to-peer learning, and the possibility of a truly independent portfolio / journal system that users can take on- and offline at will and that is totally under their own control.

Clipper Jisc RDN workshop, Cambridge 6th September 2016 – sparking ideas

I attended a very busy and interesting meeting of the Jisc RDN (Research Data Network) and gave a presentation about our work in the Clipper project. Many of the attendees were involved with the Jisc shared service pilots in this area. The event was held in the historic Corpus Christi College, and the main plenaries were held in the McCrum Lecture Theatre – up a side alley from the famous Eagle pub (where I had a very fine pint of Greene King IPA – after work). You never know what may turn up at these events, and it pays to keep an open mind about possible connections; this was one of those days when sparks seemed to fly from different ideas.

Schematic showing the overlaps between web annotation and data citation

The day began with a really interesting and thought-provoking keynote from Danny Kingsley, the Head of Scholarly Communications at Cambridge. During this she mentioned the challenges presented by time-based data such as audio and video (Clipper, I thought!). But Danny also mentioned the growing field of data citation and the challenges this presented. This created Spark No. 1 – I thought to myself: well, Clipper is actually a form of data citation, specialising in time-based data (citing parts of a web data resource via a URI and making some comments about it in context).

The more I thought about this as I sat in the lecture theatre, the more notes I scribbled. Clipper is also a web annotation tool using emerging W3C standards in this area, so that standard provides a nice potential vehicle for creating and transporting data citations more generally. This got me thinking about the work we have been doing with the Roslin Institute at Edinburgh University in the project (see the draft ‘Clipper Snapshot Case Studies‘ document), where we discussed linking Clipper annotations to the DataCite DOIs ‘minted’ by Roslin for the data linked to the time-based media files we were annotating. The DOIs provide the provenance of the data we are ‘clipping’ and annotating. It made a lot of sense then in the Clipper project, and perhaps now in the wider field of general data citation. After all, the content of a W3C web annotation can carry any information we like, so it should be able to accommodate all disciplines and emerging data citation formats.

I was musing about this at the lunch break when I briefly bumped into Neil Jefferies (Head of Innovation at the Bodleian Library, Oxford), whom I knew from the Jisc Data Spring Programme. I was explaining these ideas to him when he added the idea of using the ORCID standard in the mix to identify researchers and link them to their data – Spark No. 2. It’s an attractive idea: use existing standards (DOI, ORCID) with the soon-to-be-standard W3C Web Annotation data model as a means of creating and transporting data citations. One of the advantages is that the citations themselves would be easily shared on the web and so accessible to search engines and analytics services.
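
Sketching the idea as a W3C Web Annotation (the `@context` and overall shape follow the W3C data model; every identifier below – the ORCID, the DOI and the media URL – is made up for illustration):

```javascript
// A hand-made example of a data citation carried as a W3C Web Annotation:
// the target cites a time segment of a media file via a Media Fragment
// ("#t=120,185" → seconds 120 to 185), the body records the comment, and
// the creator and dataset are identified by a hypothetical ORCID and DOI.
const annotation = {
  '@context': 'http://www.w3.org/ns/anno.jsonld',
  'type': 'Annotation',
  'creator': 'https://orcid.org/0000-0000-0000-0000',  // hypothetical ORCID
  'target': {
    'source': 'https://example.org/media/interview.mp4#t=120,185',
    'type': 'Video'
  },
  'body': {
    'type': 'TextualBody',
    'value': 'Discussion of the trial data; dataset: https://doi.org/10.1234/example'  // hypothetical DOI
  }
};
```

Because this is just JSON-LD on the open web, such citations would be straightforward for search engines and analytics services to harvest.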

Perhaps at some point it would be useful to do some pilot work in this area…

Some images from the Cambridge event are below, and here is the SlideShare version of our workshop.

Addendum: Neil got back in touch and suggested I look at the subject of ‘nanopubs’ – at first, I have to confess, I thought of micro breweries! But a search turned up this link:

http://nanopub.org/wordpress/?page_id=65

It seems to map nicely onto what we have been discussing…hopefully to be continued.

Images from the RDN event are below

 

Where the Clipper project workshop was held – the ‘new’ part of Corpus Christi College
The old part of Corpus Christi College, where the other workshops were held

The Corpus Christi Dining Hall at lunchtime.

It’s the little things…Clipper & the W3C at Berlin

Trevor and I attended the I Annotate web annotation conference in Berlin this week, having been kindly alerted to it by colleagues at EUSCREEN. We had previously encountered the image annotation standard IIIF through colleagues from Digirati in the UK. Previous experience with standards had made us a little wary, as sometimes standards work can lose contact with practical everyday experience and become an expensive end in its own right, consuming vast resources but leading nowhere – my own experience with educational interoperability standards confirms that :-).

So, we were wary of getting entangled in a standards runaway – and as it happens, some of the other participants had similar reservations about past standards initiatives, including W3C ones. However, our experience of attending the W3C working group briefing on the development of the web annotation standards was like a breath of fresh air. One statement in particular stuck in my mind – it went something like:

“Look, we don’t care what you do inside your own [web annotation] systems, but when you come to share your data with the outside world it makes sense to do it in a standardised way – so that others can make sense of it and use it”

This was the turning point for me – the little thing that revealed the intent – that, and the fact that the proposed standard is admirably practical and lightweight, and makes useful reuse of other W3C standards such as Media Fragments. Believe it or not, I have seen developers and designers trying to adopt a heavy standard internally in their systems in a slavish and sometimes pedantic manner – leading to what might most charitably be described as ‘sub-optimal outcomes’.

So, a great result for us from attending the conference – we get a ready-made data model that we can adopt and build on without having to dream up our own, and one that makes compliance with the emerging W3C web annotation standards easier and more useful.

John

 

Clipper @ I Annotate 2016

This week John and Trevor are attending the I Annotate 2016 conference in Berlin – here is a link to the PDF of our presentation. The last 4 slides describe the new technical architecture of Clipper. We think it will fit well with the world of annotating the web; we are very much looking forward to finding out more about this area of web development, as it fits so well with our plans, and we hope our conference colleagues will find our work interesting / useful.

Down the Rabbit Hole

In this third stage of Clipper development we have, after some discussion, decided to change the technical infrastructure we have been using (JavaScript, SQL, PHP) to a more modern, powerful and scalable set of technologies (Angular2, MongoDB, NodeJS, JSON-LD). This comes at a price: some of it is very new and still evolving (Angular2), and the rest is new to us as technologists and developers. In a small team with fixed project time limits this presents us with risks and extremely steep learning curves. Our first encounters in creating a stripped-down test version (‘Clipper Lite’) have confirmed this, yet we think the potential benefits outweigh the risks: faster future development (eventually!) and other related products and services we can create on the same foundation.
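To illustrate the kind of data the new stack is built around, here is a minimal sketch of a clip represented as a JSON-style document, ready for storage in a document database such as MongoDB. The function name and field names are our own assumptions for illustration, not the final Clipper schema; the fragment string follows the W3C Media Fragments syntax:

```python
def make_clip(source_url, start, end, note=""):
    """Build an illustrative clip document.

    The segment is identified with a Media Fragments temporal
    fragment (t=start,end, in seconds). Field names here are
    hypothetical, not the final Clipper schema.
    """
    if end <= start:
        raise ValueError("clip must end after it starts")
    return {
        "source": source_url,
        "fragment": f"t={start},{end}",  # W3C Media Fragments syntax
        "note": note,
    }

clip = make_clip("http://example.org/video/lecture.mp4", 12.5, 47.0,
                 note="Definition of annotation")

# The clip can be addressed as source URL plus fragment.
print(clip["source"] + "#" + clip["fragment"])
```

Documents of this shape map naturally onto MongoDB collections and onto JSON-LD serialisation, which is a large part of why the new stack appeals to us.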

Hence the title of this post:

“Down the rabbit hole”, a metaphor for an entry into the unknown, the disorienting or the mentally deranging, from its use in Alice’s Adventures in Wonderland

Addendum – September 13 2016

It seems to have paid off – we are making some great progress now and are entering a testing cycle before releasing a beta service and code for evaluation.

Clipper @ IIIF Audio/Video Workshop

IIIF AV Workshop attendees

Last week the Clipper team participated in an invited workshop at the British Library, organised by the International Image Interoperability Framework (IIIF) consortium. The purpose of the workshop was to collate use cases and start outlining a development road map for extending the IIIF to include support for Audio/Video annotation. This was a great opportunity to find out more about the IIIF and the collaborative design process that has produced it.


Clipper Reloaded: Phase 3 Begins

A quick note to say that we have been successful in getting funding for the 3rd phase of development as part of the Jisc Research Data Spring competition. This is a great achievement and we are looking forward to getting on with the next phase of work. Our proposals to the judges are here, although our planned timings will certainly change due to the delay in the decision-making process (it was due before the end of 2015).

We will be considering our technical development options and are likely to change our technical architecture in this phase, moving away from JavaScript, PHP and SQL and adopting Angular 2, MongoDB and NodeJS. This is quite a big change – hence the ‘Reloaded’ tag – and not without its risks, but we think the benefits are very strong. We shall be quiet for the next few weeks as we dig into these issues.

Open University Workshop Videos

On Friday the 27th November we held a Clipper project meeting at the OU and then followed it with 2 workshops that were also videoed and webcast live over the internet by the OU. It was a long day but very productive. The workshops were held at the Knowledge Media Institute, Open University, Milton Keynes.

IIIF Workshop

The first workshop was delivered by Tom Crane of Digirati, with whom we have been discussing which technical standards to include in the Clipper project. The subject of the workshop was the International Image Interoperability Framework (IIIF); we have been discussing how this might be extended to cover annotating audio and video resources. You can find the webcast at this link: http://stadium.open.ac.uk/2620

Clipper Workshop

The second workshop was a short overview of the Clipper project, based on our previous community engagement workshops, followed by a question and answer session. You can find the webcast at this link: http://stadium.open.ac.uk/2624