Development Workshop OU 27 February – 1 March

We met up at the OU from Monday to Wednesday this week for a development session. We got a lot done and advanced quite a bit – largely due to the work of our Spanish colleague David Roldan Alvarez. We have been doing quite a bit of redesign in the run-up to launching a live service. Clipper now works with these services:

  • YouTube – done
  • Soundcloud – waiting on API permissions
  • Facebook – done
  • Vimeo – done
  • Podbean – done with user info
  • BBC radio – done with user info
  • M3U8 format – working
  • Dropbox – done – works with user guidance (remove the zero at the end of the URL and add a 1 – see the sketch just below this list)
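
Here is a minimal sketch of how that Dropbox guidance could be automated, assuming the shared link carries the usual ?dl=0 parameter (the example link and function name are just placeholders, not part of the current code):

```javascript
// Hypothetical helper: turn a Dropbox share link into a direct media link.
// Assumes the link ends with the usual "dl=0" parameter, e.g.
//   https://www.dropbox.com/s/abc123/talk.mp4?dl=0
function toDirectDropboxUrl(shareUrl) {
  const url = new URL(shareUrl);
  url.searchParams.set('dl', '1'); // "remove the zero and add a 1"
  return url.toString();
}

console.log(toDirectDropboxUrl('https://www.dropbox.com/s/abc123/talk.mp4?dl=0'));
// -> https://www.dropbox.com/s/abc123/talk.mp4?dl=1
```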

That’s quite a list! It takes Clipper into really interesting territory as we go forwards. We are working on Microsoft OneDrive and Box, as well as on working over intranets and with DAM systems. We have a bit more to do before we update the current version. M3U8 is a big step: it is a format used by a lot of museums and archives, and it presents a ‘barrier’ between a video stream URL and users for security. What this means is a big deal for us – if Clipper is whitelisted with an archive service, Clipper users can access the archive, create clips and annotations, and share them over the open / social web, while the video stays where it is. For some archives this could be a game changer: by allowing more interactive user engagement and spreading it into the world of social media, it could drive a lot of traffic to the archive website and, importantly, generate masses of rich analytics – which in turn can justify the costs of the archive and even lead to new revenue streams.
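
For anyone curious about what ‘working’ means for M3U8 in the browser: one common approach is the open-source hls.js library, sketched below. The stream URL is a placeholder, not a real archive endpoint.

```javascript
// Minimal sketch: playing an M3U8 (HLS) stream in an HTML5 <video> element
// using the open-source hls.js library. The stream URL is a placeholder.
import Hls from 'hls.js';

const video = document.querySelector('video');
const streamUrl = 'https://archive.example.org/collection/item.m3u8'; // placeholder

if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(streamUrl); // fetch and parse the playlist
  hls.attachMedia(video);    // feed the segments to the <video> element
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Safari can play HLS natively
  video.src = streamUrl;
}
```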

And Facebook… that’s a big deal, as it is a major video platform these days; this opens up some interesting possibilities in education and community learning.

Clipper Checklist – love ticking these off! This shows Facebook and M3U8 amongst others.

Widening the Clipper Net

At the moment we can clip YouTube content and any online files we have a direct link to, but we need to widen the net to other popular content sources if we are to be useful in educational settings as well as for digital researchers.

Now that we have the SSL development site up at clipperdev.com I have been starting another cycle of testing. One of the things we have been thinking about is adding other online audio-visual collections. Early indications are encouraging with the likes of Dropbox (good for personal sharing), Soundcloud and Podbean. Vimeo is a bit tricky – defining a start and stop point is awkward – but we have some ideas for workarounds.

Back at the City of Glasgow College we have our own internal collection of online videos for learning resources, and it looks like we can ‘clipperise’ them as well, but there are complications with the use of the Citrix thin client and the way it seems to handle Clipper share links.

One thing for certain is that we have to have auto-attribution of the content being served via Clipper and some identification of the source website – probably not the whole path to the video or audio file, but the general source website. The legal issues are not as bad as they might appear at first: the main one is attributing and crediting the source, and not appearing to ‘pass off’ (in copyright terms) the content as the Clipper user’s own – we can have auto-generated content that makes that clear. I will come back to this post to record the results of testing and the likely alterations we will be making.
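
To give a feel for what that auto-attribution might look like, here is a rough sketch – the credit wording, function name and URLs are purely illustrative, not the final text we will use:

```javascript
// Rough sketch of the kind of auto-attribution we have in mind: derive the
// general source website (not the full path to the media file) from the URL.
// The wording of the credit line is illustrative only.
function attributionFor(mediaUrl) {
  const site = new URL(mediaUrl).hostname; // e.g. "media.example.ac.uk"
  return `Source: ${site} – clip created with Clipper; all rights remain with the original publisher.`;
}

console.log(attributionFor('https://media.example.ac.uk/videos/lecture-01.mp4'));
```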

SSL Progress

A good way to start the new year! We have got the new secure development site working at https://clipperdev.com/. It has taken us a while, as this is the first time we have deployed a secure site. This is a big step for us, as secure sites are increasingly demanded by institutional network managers, especially those in education using eduroam – and we know companies like Google are pushing for this as well. This means we shall be able to offer a trial service to colleagues on campus at City of Glasgow College and the OU.

I will be writing this all up shortly in the Final Report for Phases 1/2/3. We have certainly covered a lot of ground in the last 12 months in every sense of the word.

Research Data Spring Showcase, University of Birmingham

Just arrived in Brum to meet colleagues interested in RDM tomorrow at the Jisc RDS showcase event #dataspring. Looking forward to hearing what other folk are doing and catching up.

We will be giving a lightning / flash talk followed by a networking / chat session to demo the app and talk about future plans and our experience of working with the technologies we have chosen.

MECCSA Practice Based Research Symposium, Edinburgh Napier University

On Monday the 13th of June we will present a short introduction to the Clipper project at the MECCSA Practice Based Research Symposium at Edinburgh Napier University. This academic grouping is one of the core areas that Clipper is aimed at in terms of research data management, so it will be interesting and useful to get reactions and feedback.

Slides are here as PDF

Slides are here on Slideshare

Demo handout instructions are here to download as a PDF


Clipper @ I Annotate 2016

This week John and Trevor are attending the I Annotate 2016 conference in Berlin – here is a link to the PDF of our presentation. The last four slides describe the new technical architecture of Clipper. We think it will fit well with the world of annotating the web. We are very much looking forward to finding out more about this area of web development, as it fits so well with our plans, and we hope our conference colleagues will find our work interesting / useful.

Clipper Reloaded: Phase 3 Begins

A quick note to say that we have been successful in getting funding for the 3rd phase of development as part of the Jisc Research Data Spring competition. This is a great achievement and we are looking forward to getting on with the next phase of work. Our proposals to the judges are here, although our planned timings will certainly change due to the delay in the decision-making process (it was due before the end of 2015).

We will be considering our technical development options and are likely to change our technical architecture in this phase away from JavaScript, PHP and SQL, adopting Angular 2, MongoDB and Node.js instead. This is quite a big change – hence the ‘Reloaded’ tag – and not without its risks, but we think the benefits are very strong. We shall be quiet for the next few weeks as we dig into these issues.

Technical Standards / System Design Part 2: Looking Forwards to Phase 3

The current prototype Clipper application is built using these open Web standards

Moving forwards in phase 3 we envisage using / investigating these standards

Our aim from the beginning has been to create a toolkit that has little or no dependency on any proprietary and ‘closed’ technology or standards. Choosing the above standards was a good start. Moving forwards we shall need to create a more detailed data model. We had been aware of the W3C Annotation Data Model: http://www.w3.org/TR/annotation-model/ and the W3C Web Annotation Working Group: http://www.w3.org/annotation/.
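
To make that concrete, here is a hand-written illustration (not our implemented schema) of how a single clip might be expressed along the lines of the W3C Annotation Data Model, using a media fragment selector for the start and end times; all identifiers and URLs are placeholders:

```javascript
// Illustrative only: a clip expressed along the lines of the W3C Annotation
// Data Model, with a media fragment selector giving start/end times in seconds.
// All identifiers and URLs are placeholders, not part of the current prototype.
const clipAnnotation = {
  '@context': 'http://www.w3.org/ns/anno.jsonld',
  id: 'https://clipperdev.com/annotations/clip-001',
  type: 'Annotation',
  body: {
    type: 'TextualBody',
    value: 'Interviewee describes the 1968 field recordings.',
    format: 'text/plain'
  },
  target: {
    source: 'https://archive.example.org/media/interview.mp4',
    selector: {
      type: 'FragmentSelector',
      conformsTo: 'http://www.w3.org/TR/media-frags/',
      value: 't=30,75' // seconds 30 to 75 of the source recording
    }
  }
};
```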

From a research point of view, the following three standards could provide the vital ‘glue’ to bind a Clipper installation or service into the global digital research ecosystem (a rough sketch of how they might fit together follows the list):

  1. DOI: Digital Object Identifier System: in our discussions at the Roslin Institute we have identified the possible use of DOIs to identify Cliplists, clips and annotations, as well as the audio-visual resources they are linked to
  2. ORCID: Provides a way of linking annotations etc. to individual researchers
  3. OAI-PMH: provides a useful way of sharing Cliplist information between repositories
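
As trailed above, here is a very rough sketch of how these three might surface in practice; every identifier, endpoint and value below is made up for illustration:

```javascript
// Rough sketch only – how these identifiers and protocols might surface in a
// Clipper record and an exchange with a repository. All values are made up.
const cliplistRecord = {
  identifier: 'https://doi.org/10.1234/clipper.cliplist.42',               // 1. DOI for the cliplist
  creator: { id: 'https://orcid.org/0000-0002-1825-0097', type: 'Person' }, // 2. ORCID for the researcher
  source: 'https://doi.org/10.1234/archive.recording.7'                     // DOI of the underlying recording
};

// 3. OAI-PMH: harvesting Dublin Core records from a (hypothetical) repository endpoint.
const endpoint = 'https://repository.example.ac.uk/oai';
const harvestUrl = `${endpoint}?verb=ListRecords&metadataPrefix=oai_dc`;
fetch(harvestUrl).then(res => res.text()).then(xml => console.log(xml));
```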

As a result of our community engagement activities we have been fortunate in encountering Tom Crane and the Digirati company, and in the ensuing discussions Tom has suggested that these existing and emerging standards will be really worth exploring in Phase 3 – we think they look really promising:

Tom has pointed out that the IIIF Presentation API (http://iiif.io/api/presentation/2.1/), with its concept of an IIIF manifest, is close to our idea of the project being the container for Cliplists etc. He has also suggested that the IIIF Shared Canvas concept (http://iiif.io/model/shared-canvas/1.0/index.html) can be extended to time-based media. With some time-based media vocabulary, the IIIF work might be just what we need in Clipper. Tom is coming to the OU this Friday (27/11/15) to present the work of the IIIF and we hope to discuss this further with him then and make plans for Phase 3.
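
For illustration only, here is a heavily trimmed manifest-style object in the spirit of the IIIF Presentation API, treating the manifest as the ‘project’ container and a canvas as one audio-visual item; the duration field is a hypothetical time-based extension of the canvas idea, not part of the published specification:

```javascript
// Illustrative sketch only: a trimmed-down manifest in the spirit of the IIIF
// Presentation API 2.1. The "duration" field is a hypothetical time-based
// extension, and all identifiers are placeholders.
const manifest = {
  '@context': 'http://iiif.io/api/presentation/2/context.json',
  '@id': 'https://clipperdev.com/iiif/project-12/manifest',
  '@type': 'sc:Manifest',
  label: 'Oral history project – pilot interviews',
  sequences: [{
    '@type': 'sc:Sequence',
    canvases: [{
      '@id': 'https://clipperdev.com/iiif/project-12/canvas/1',
      '@type': 'sc:Canvas',
      label: 'Interview 1',
      duration: 1800 // hypothetical: 30 minutes of audio/video
    }]
  }]
};
```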

Technical Standards / System Design Part 1: Reflections

We have been discussing the Clipper toolkit with people recently as part of our community consultation process. One interesting question we have been asked by the digital library / information community is what ‘Data Model’ we are using. To be honest we have not thought too much about this until now, as we had done a fair bit on that previously, around 2009. So, a bit of explanation here might help us to clarify our position going forwards.

In the earliest phase of Clipper (around 2009) we created it in Adobe Flash and ActionScript, using the Adobe AIR ‘rich internet application’ runtime to create a cross-platform app (PC and Mac, that is). This was a little before the HTML5 take-off and the rise of tablets and smartphones. In that earlier project we did a lot of thinking about the data flows involved in the user interacting with audio-visual resources and what data would need to be gathered by the system to deliver the functionality the user needed. You can find a set of graphic flowcharts representing the data flow at this link. At the time we were fortunate to be working with a colleague at Manchester University (Gayle Calverley) who had just completed a study for Jisc on the types of metadata needed for the storage and management of time-based media in repositories. The report that Gayle created was thorough and really useful; it was called the “Time Based Media Application Profile”, and it is still online:

http://wiki.manchester.ac.uk/tbmap/index.php/Main_Page

In the end we did not implement a detailed data model based on that study; instead we developed our own ‘slimline’ version based on user ‘walkthroughs’ of the system and ‘reverse engineering’ approaches to see what data would be required to deliver the functionality we needed. The metadata schema we came up with was based on Dublin Core. We produced our own report detailing our approach to metadata and, with Gayle’s help, mapped it to the Jisc TBMAP report. This approach certainly made our life a lot easier then, and to an extent it still does today; it is useful to reflect on this as we go forwards, and I think we shall certainly be using this and Gayle’s report in Phase 3.
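
For flavour, here is a reconstructed sketch (not a copy of the original schema) of the kind of ‘slimline’ Dublin Core based record we mean for a single clip; the field values are invented:

```javascript
// Reconstructed illustration (not the original schema): the kind of 'slimline'
// Dublin Core based record we mean for a single clip. All values are invented.
const clipMetadata = {
  'dc:title': 'Opening remarks, field recording 3',
  'dc:creator': 'J. Casey',  // the Clipper user who made the clip
  'dc:date': '2016-02-29',
  'dc:source': 'https://archive.example.org/media/recording-3.mp4',
  'dc:identifier': 'clip-00042',
  'dc:description': 'Start 00:01:30, end 00:02:45 – introduction to the site survey.',
  'dc:rights': 'All rights remain with the source archive; the clip is a reference only.'
};
```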